Tech Talk

Packaging: The Red-Headed Stepchild of the Semiconductor Industry

The time-honored trope of teen movies is the mousy nobody who finally takes off her glasses and lets down her ponytail, and suddenly she's the prom queen. In the semiconductor industry's version of that movie, that girl's name is Packaging.

Packaging was the undercurrent of much of this year's International Electron Devices Meeting. No one could have put it better than Semiconductor Industry Association vice president Pushkar Apte, who stated that "packaging is the red-headed stepchild" of the industry. Until now, anyway. Two major forces are driving the attention back to packaging: medical applications and the end of scaling.

As an example of the former, at IEDM, Purdue University researchers showed implantable wireless transponders that can monitor radiation doses received during cancer treatments. The miniature transponders would be implanted near a tumor during radiation therapy. The part is a prototype, as far as I understand, and the Purdue researchers are working with the radiation oncology department at the University of Texas Southwestern Medical Center. There, doctors can give them an idea of what's needed in terms of packaging. But what happens when a part like this transitions from prototype to off-the-shelf? It's going to need innovative packaging. That's what.

The second driver is the ever-impending end of Moore's law. It's no secret that engineers are running out of options with transistor scaling. The industry is nominally at the 32-nm process—which means Intel is about to start shipping its first 32-nm microprocessor, called Westmere. No one else is close.

But other chipmakers are struggling to keep up with that roadmap. AMD released its first 45-nm processors only this past January. According to EE Times, "a period of more than two years is now expected between the introduction of AMD's 32nm technology and the previous 45nm node first seen in late 2008."

TSMC is also lagging behind Intel but ahead of AMD with 32-nm process technology, which it expects to have ready in 2010. (For more on where everyone stands with 32-nm process technology, read this exhaustively researched EE Times piece.)

Why is it so hard to scale? Researchers agree that the industry has hit a brick wall: scaling transistors to ever-tinier dimensions causes reliability to fall steeply. Researchers who didn't want to go on the record told me as much, and at a short course on Sunday, attendees repeatedly expressed frustration at the difficulties of further scaling.

3D integration looks like a viable alternative for chipmakers who don't want to bang their heads against Moore's law in the quest for 22-nm process technology. 3D integration boils down to this: stack 'em vertically instead of squeezing more and tinier transistors onto a planar surface. It means that with a fixed transistor and die size, you can still add processors and memory. Johns Hopkins University electrical engineering professor Andreas Andreou estimated that by the time the industry arrives at 22-nm process technology, it would be more effective to stack four 22-nm chips than to press on to the 11-nm node. "The gold rush of shrinking will be replaced by 3D," he predicted.
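Andreou's arithmetic is easy to check: transistor density scales roughly with the inverse square of the feature size, so a full shrink from 22 nm to 11 nm buys about a fourfold density gain, the same as stacking four 22-nm dies. Here's a back-of-envelope sketch; the idealized quadratic-scaling assumption (no layout overhead) is mine, not the speaker's:

```python
# Back-of-envelope: stacking vs. shrinking.
# Assumes transistor density scales as 1/(feature size)^2,
# an idealized full shrink with no layout overhead.

def density_gain(old_nm, new_nm):
    """Relative transistor-density gain from shrinking old_nm -> new_nm."""
    return (old_nm / new_nm) ** 2

shrink_gain = density_gain(22, 11)  # 22 nm -> 11 nm: 4.0x
stack_gain = 4                      # four 22-nm dies in a 3D stack

print(shrink_gain == stack_gain)    # the stack matches the shrink
```

Under these assumptions, four stacked 22-nm chips hold as many transistors as one 11-nm chip of the same footprint, which is why stacking looks like a way to skip a node or two.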

Even Nvidia is on the 3D bandwagon: John Chen said in his keynote presentation that graphics processors can’t make progress unless they go 3D. Two IEDM sessions were devoted entirely to advanced 3D technology and processing for memory and logic. In one session chaired by researchers from IBM and Samsung, CEA-LETI researchers threw down the gauntlet: For the first time, they said, 3D CMOS integration can be considered a viable alternative to sub-22nm technology nodes. TSMC researchers positioned 3D integration as healthy competition for the 28-nm node. IMEC, Fujitsu, and ST Micro presented their research into making 3D work.

Researchers are divided on the severity of the issues that plague 3D integration: heat, alignment, and metal contamination all remain. But according to Hopkins professor Andreou and NEC researcher Yoshihiro Hayashi, heat is a red herring: any number of innovations will solve the heat problem by the time 3D-packaged wafers are ready to hit the shelves (among them, using through-silicon vias to transport the excess heat to the heat sink, but that's a whole other story).

In any case, the general assumption is that you can work around the Moore's law limitations by doing other things, like 3D integration. At the very least, 3D chip integration might buy the industry a little time so that researchers can get their ducks in a row with promising technologies like extreme ultraviolet lithography, multigate transistors, and 2nd-gen high-k metal gate technology.

But we're not at the prom yet. (We’re still watching the part of the movie where the best friend realizes that our girl Packaging needs a haircut and a full face of makeup.)

You’ll note that most of the problems researchers described are about packaging. Many ingredients in 3D stacks rely on innovations in packaging to make them viable. To solve the heat problem, for example, researchers are assuming that new ways of diverting excess heat to the heat sink will be developed. But who's going to figure that out? Are through-silicon vias part of the chip or part of the packaging? What about those heat sinks?

3D chips require new kinds of packaging. And new kinds of packaging require innovation. And that, at last, is the crux of the problem: innovations in packaging? Whose problem is that?

The semiconductor industry has disintegrated, over the past decades, into many horizontal layers. Consider how the chip in your laptop got there. A designer at a fabless semiconductor company probably designed it and then sent the design to TSMC. TSMC manufactured the chip based on that design and sent it to the packaging company, which packaged the chip and sent it to the systems guy, who put it all together and sent it to its final destination, the end-unit provider.

Now companies are finding that they need to re-integrate at the leading edge. Some fabless companies have said that in order to get the packaging they want, they need to invest in packaging startups.

That disintegration/reintegration dynamic raises a question: Who across these companies has ownership, with all the rewards and liabilities that word implies? If packaging becomes more important and plays a bigger role in chip design and innovation, it will need to address issues of heat, reliability, and safety, particularly for medical applications.

The packaging industry as a whole sees about $20 billion in revenue each year. Contrast that with Intel alone, which pulls down $40 billion to $50 billion a year. Chipmakers, moreover, pump almost 20 percent of their revenue back into R&D on average. Contrast that with R&D spending by packaging companies. ASE, the biggest of them, brings in about $3.5 billion a year and is the record holder among its cohort for how much it spends on research and development: 3.2 percent. Twenty percent of $40 billion is a lot, and that's probably why Intel is going to be the first to ship 32-nm processors. And 3.2 percent of $3.5 billion? It's not enough for any kind of risky, out-of-the-box innovation. The industry is just going along to get along.
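A quick calculation with the round figures quoted above makes the gap concrete:

```python
# R&D budgets implied by the figures in the text (rounded, in USD).
intel_revenue = 40e9    # low end of Intel's $40 billion to $50 billion
intel_rd_rate = 0.20    # chipmakers' ~20 percent plowed back into R&D
ase_revenue = 3.5e9     # ASE's annual revenue
ase_rd_rate = 0.032     # 3.2 percent, the packaging industry's record

intel_rd = intel_revenue * intel_rd_rate  # about $8 billion
ase_rd = ase_revenue * ase_rd_rate        # about $112 million

print(f"Intel outspends ASE on R&D by roughly {intel_rd / ase_rd:.0f}x")
```

Eight billion dollars against roughly $112 million: nearly two orders of magnitude.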

Who can blame them? Why should they absorb the risks that will plague any kind of innovation in packaging? Innovation in packaging also implies liability. Just look at what happened to Apple last year when Nvidia famously screwed up its GeForce GPUs. Apple had to replace the faulty chips for free. The problem was traced to a packaging defect.

Microsoft had to write down its first Xbox 360 chips, to the tune of $1 billion, because of packaging issues that led to the infamous "red ring of death."

And if you’re still not convinced, think about the potential liability in medical implants.

Right now, no one is in a position to be responsible for innovation in packaging, but innovation is sorely needed. Someone needs to step up and give this poor girl a makeover.



MIT Forum on Human Spaceflight Highlights Challenges and Potential Solutions

Updated Wednesday, December 16, 2009: Video of forum posted below.

NASA has a problem. The U.S. space agency is trying to build new rockets and new spaceships, but it's not clear where they should go (see our Special Report on "Why Mars? Why Now?" for detailed coverage). This summer's review of U.S. human spaceflight plans, conducted by a ten-person commission headed by former Lockheed Martin chairman and CEO Norman Augustine, came up with several options in a report released in October (failure was not one of them).

In Friday’s forum on the implications of that report, held at MIT, Augustine and fellow committee member Edward Crawley, an MIT professor of engineering and aeronautics and astronautics, shared the floor with space policy historians John Logsdon and Asif Siddiqi to discuss what was in the report, what it means, and the next steps toward a political decision.

Key points emphasized in the panel:

--Mars must be the goal, but it must not be the first destination. It’s too hard, too expensive, and we need to learn to live and work in space first.

--NASA has to get out of the crew and cargo business, and focus instead on what to do beyond low-earth orbit; let the commercial sector take over hauling folks and stuff to orbit.

--The NASA administrator must have authority to turn plans into action, i.e. to be “the CEO of NASA,” instead of having his hands tied by Congress. (Currently, Congress won’t allow the administrator to reduce NASA’s workforce or close any facilities, since they bring revenue and jobs to several states. But that makes it harder to run the agency efficiently.)

--It’s a multiplayer space game now, not just the U.S. and Russia. Future space exploration missions will have to take into account burgeoning space programs in China and India, in addition to already-active programs in Japan, Canada and Europe.

--The president needs to actually make a decision, and then commit to it. Let’s either have a viable human spaceflight program, with enough resources to make it valuable, or let’s have the courage to end it now, Augustine said, rather than letting the program struggle along half-heartedly.

The Augustine report makes clear that the current “Constellation” program, which was put in place to answer the Bush administration’s 2004 challenge to return to the moon and Mars following the space shuttle’s retirement in 2010—but which was not funded accordingly—is unsustainable at its current funding level.

So the three decisions the president will have to make, Crawley said, are the degree to which the U.S. should embrace the international community; the destination (i.e. the moon, Mars, a variety of near-Earth objects like asteroids, or moons of Mars and Mars fly-arounds); and the budget.

Though the committee members were careful not to endorse any particular options, given that their charter was merely to state choices, not to recommend any, it’s possible to read between the lines. A flexible path is the most logical, and Crawley’s explanation had me convinced. It provides for intermediate steps, multiple new accomplishments, and continued exploration while simultaneously building equipment for future landings on the moon and Mars. It also provides the chance to have many new “firsts” along the way, instead of waiting thirty years for the first person to land on Mars. (Here’s another good analysis of the flexible option by longtime space expert and former NASA engineer James Oberg.)

Unfortunately, the logical path is also the one that will result in benefits ten or so years down the road, Crawley suggested, which means it’s unlikely as a political decision.

Regardless, the decision is coming soon. Recommendations prepared by NASA, the Office of Science and Technology Policy, and the Office of Management and Budget are even now making their way to the president’s desk for a decision, according to John Logsdon. Though President Obama will have to make the choice by the end of December to influence the FY 2011 budget, Logsdon predicts that the announcement won’t come until later, as part of a state of the union address or in a separate speech. It is likely that any such announcement will include an invitation to international partners to join in crafting the future of spaceflight, Logsdon said in the MIT forum.

Logsdon also expects that the president himself will make the decision, rather than leaving it to his science advisor, national security council, or budget people. “And the president will want to take credit for it,” Logsdon added.

In the meantime, an agreement reached earlier this week between the House and Senate on a 2010 appropriations bill would ensure that the current Constellation program won’t be terminated without official approval while the agency awaits Obama’s verdict.

The bill would provide U.S. $3.8 billion to continue human spaceflight operations through 2010 in the absence of a decision from the administration. If voted in, the bill would also require any changes in the program’s direction to be approved by later acts of Congress.

The discussion at MIT was thoughtful and intelligent, and it showed that the Augustine committee really did get a lot of analysis done in a very short time. But no one can predict the president’s decision. As Logsdon concluded, “I guess the final word is: stay tuned.”

Watch a video of the forum below (about two hours - moderator David Mindell starts speaking 5 minutes in, panel introduced starting at 12:20, comments from the panel begin at 17:50) or catch it at MIT Aero-Astro news. 

Video: Massachusetts Institute of Technology

American Sign Language Goes Mobile

In March we covered research at Cornell University that's bringing sign language to mobile devices, which the deaf community has traditionally been able to use only for text messaging.

Now the Cornell team, led by electrical and computer engineering professor Sheila Hemami, has developed prototype devices and is testing them with about 25 American Sign Language (ASL) speakers in the Seattle area.

From the press release:

“We completely take cell phones for granted,” said Sheila Hemami... “Deaf people can text, but if texting were so fabulous, cell phones would never develop. There is a reason that we like to use our cell phones. People prefer to talk.” The technology, Hemami continued, is about much more than convenience. It allows deaf people “untethered communication in their native language” – exactly the same connectivity available to hearing people, she said.

The challenge was to make the phone's video processing deliver enough frames per second for test subjects to converse in real time, despite low bandwidth, while not draining the phone's battery.
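To get a feel for the constraint, here's a back-of-envelope bitrate estimate. Every parameter below (resolution, frame rate, compression ratio) is an illustrative assumption of mine, not a figure from the Cornell team:

```python
# Rough bitrate math for low-bandwidth sign-language video.
# Every parameter here is an illustrative assumption, not Cornell's figure.
width, height = 176, 144   # QCIF, a common low-end mobile video resolution
bits_per_pixel = 12        # raw YUV 4:2:0 video
fps = 10                   # frame rate low enough to spare battery and bandwidth
compression_ratio = 200    # aggressive modern video-codec compression

raw_bps = width * height * bits_per_pixel * fps  # ~3.0 Mb/s uncompressed
compressed_bps = raw_bps / compression_ratio     # ~15 kb/s on the wire

print(f"raw: {raw_bps / 1e6:.1f} Mb/s, compressed: {compressed_bps / 1e3:.1f} kb/s")
```

Even at these modest settings, the raw stream runs to megabits per second; only heavy compression brings it down to something a cellular uplink of the day could carry, and that compression work is exactly what taxes the battery.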

Now the researchers are working to make the phones "as user friendly as possible," while reducing the cost of integrating mobile ASL software into the devices.

Darpa, the Government Agency With the Best Imagination, Sends Competitors on a Search for Red Balloons

Who doesn’t love Darpa, the Defense Advanced Research Projects Agency? After all, we have Darpa to thank for the Internet.

In recent years, Darpa has stood out as the agency that knows how to capture the imagination of scientists, engineers, and the general public. Take the Grand Challenges that sent autonomous vehicles across the desert. These were hugely difficult tasks that had engineers all over the country scrambling night and day; the effort was big, as was the prize (US $2 million).

But even when the prize is small (just $40,000), it seems that Darpa knows how to go for the emotional punch, the vivid symbol that captures the imagination.

Because tomorrow, Saturday, 5 December, Darpa staff members will be tying up 10 red balloons at unannounced locations around the U.S. Each balloon will be visible from a public roadway. The challenge: be the first to identify the latitude and longitude of each balloon. You can work alone or in teams. You can use any technical tool you can think of, or you can simply cruise around looking for them. The balloons will be visible only on Saturday, but you'll have up to nine days afterward to submit your entry.

The contest is officially called the Network Challenge; the organizers assume that you won’t be able to win without the massive use of computer communications tools and social networks. And that’s what Darpa is trying to figure out—just how these things are used for collaboration today. It is also holding the event to commemorate the 40th anniversary of the day the first four nodes of the Arpanet—the predecessor to the Internet—were connected.

It’s a very cool task; sort of Rick Smolan, originator of the Day in the Life series of photography projects, meets Christo, the artist behind huge and colorful public installations.

But I think the real evidence of imaginative genius is the choice of the red balloon as the object to identify. There's something magical about a balloon, a lonely balloon, tied up in the middle of nowhere. Filmmaker Albert Lamorisse best captured that magic in the classic French movie Le Ballon Rouge, but any child who spends a day with a balloon tied to his wrist or stroller understands that magic. Kudos to the folks at Darpa for understanding it too. And good luck to the competitors.

By the way, registration is open until the contest begins.

Photo: Darpa

What Does Real-Time Search Mean to Google?

I’m doing research on real-time search to learn what all the fuss is about, so I called up Google. Why not? They’re anxious to get “real-time” search into their results, and I wanted to know what the term means to them. Turns out they’re making a “cool” announcement about it Monday, and won’t talk to me till after that.

Much of the buzz about real-time search points to scouring Twitter feeds and Facebook status updates for the most current information on the web. But that’s pretty much the opposite of how traditional Google search works—which is based on producing results according to authority rankings established over time. So how will this real-time information get roped in with (or distinguished from) more traditional content searches, and how will the relevance of results be decided?

Forgetting the mechanics for a minute, let's get to semantics. What does “real-time” even mean? Does it mean finding info that's just been published, because it just happened? Or finding websites that have just been updated, even if the events they describe happened a while ago? Is it just for Tweets and the like, or will it be more comprehensive?

Danny Sullivan's Search Engine Land has a lengthy post from a few months ago (definitely not real time, but still worth reading) that explores this topic and also compares search results from several smaller fish like Collecta and Scoopler. Sullivan argues that the term “real-time” should refer to information that's posted immediately, as it happens—basically, Tweets and status updates, or what he calls “microblogging” (the comments section of the post provides nice counterpoint arguments). As for news and even blogs, Sullivan suggests, that content's already history by the time it goes live on the web.

To get itself up to speed, Google announced in October that it’s including Twitter updates in its search results. And yesterday, TechCrunch posted that Google has integrated Twitter functionality into its Friend Connect application (its equivalent of Facebook Connect), which lets people log in to various websites using their Google account information. Now, any site that has Friend Connect enabled will allow you to log in with your Twitter account, too, which will automatically link your Twitter profile and let you tweet right from the site. Looks like Google and Twitter are getting pretty cozy.

So what’s next? Will Google be getting access to Twitter’s Firehose Feed, which would allow the company to search all Tweets as they happen and index them? I was assured that the upcoming announcement would answer all my questions, so maybe we’ll find out more about how Google plans to walk the line between what’s hot now, and what’s the most relevant answer to my current query.

I hope it also gives us a hint of Google’s planned audience for real-time search. Who is this kind of search useful for, and what’s the value added? Is it just for people looking to learn what’s going on, faster than they can read a newspaper, log in to a website, or even scan a blog post or RSS feed? What about someone doing research on the history of a vacation destination? Will real-time search trump regular old Google search?

We’ll try to post as fast as possible after Monday’s big announcement.

Tunnels Under Gaza

Photo credit: Ibraheem Abu Mustafa / Reuters

In this month's issue of IEEE Spectrum, I describe Gaza's jury-rigged power grid, and the challenges involved in providing electricity in a territory essentially cut off from the world. But ultimately, when the grid proves unreliable, people take matters into their own hands.

Just as war and the economic blockade have forced those managing the electricity grid to cope with the seemingly impossible task of providing steady power, ordinary Gazans have had to learn to improvise when possible. When electricity faltered during the war, residents exchanged tips on creative workarounds. A favorite is harvesting the low-level current that runs through the telephone lines to recharge cellphones.

But residents also rely on consumer goods—candles, batteries and generators—smuggled through the tunnels that connect southern Gaza to Egypt. Driving into a bombed-out section of Rafah, near the Egyptian border, I got a chance to see what, in the absence of a properly working grid, has by default become this option of last resort.

Because smuggling relies, in large part, on the other side turning a blind eye, smuggling anything through Israel would be impossible; the consumer-goods tunnels all connect to Egypt. A large number of the tunnels originated in houses (tunnels are typically owned by families), now smashed to rubble, but many tunnels are already open again, covered in some cases only by an impromptu shed to shield them from Israeli aircraft and drones that might spot them from above. We stopped to go down into the shafts of two tunnels—one for fuel and one for cement. (Tunnels are often designated for specific cargo. Bigger tunnels, for instance, are reserved for cement and other goods and are reinforced with wood; fuel tunnels can in some cases be smaller.) The tunnel owner asked that we photograph quickly; if Hamas security caught us, they would demand money.

During the 2008–2009 war, Israel bombed the tunnels, collapsing many of them, but a large number of the underground passageways remain, and new ones are constantly being dug. Smuggling operations quickly resumed.

From the Palestinian perspective, the tunnels act as a slow-release pressure valve, allowing just enough goods into the territory to prevent a catastrophe, but not enough to allow the economy to function normally. While Israel points to the illegal tunnels as another sign of Gazan—and particularly Hamas—dysfunction, Hamas naturally blames Israel. “If [the borders] are open, there are no tunnels,” Fawzy Barhoom, Hamas’s spokesman, says in an interview in his Gaza office.

In the meantime, the tunnels don’t help the power plant, says Rafiq Maliha, a plant manager, with a wry laugh. The idea of a sophisticated power plant using contraband fuel or parts is just unrealistic. “We are talking about a power plant,” he said. “We can’t smuggle [parts] through the tunnels.”

Hiding from the Internet

These days, engineers are working hard to make our network connections faster and more reliable. In this issue, David Schneider reports that communications satellite company Iridium is upping its maximum data rate fourfold, and Clark T-C. Nguyen suggests that mechanical components may be the way to make future wireless devices better. Information is indeed coming at us faster and in more places. But that’s not all good. In October, author Nathan Zeldes talked about the importance of getting control of this flood that threatens productivity and creativity.

For me, the only way to get such control is to cut off my network connections, all of them, for three weeks every year.

I’ve taken this annual three-week Internet sabbatical for about as long as I can remember. (I didn’t used to call it that; I used to call it vacation.) It acts as a reset button. It reminds me that looking out into the distance instead of at the computer in front of me makes a real difference in how my eyes feel; that getting my hands off the keyboard and moving around regularly is good for my body; I sit differently, stand taller. It reminds me that having time to let my mind drift, instead of being yanked from demand to demand, can lead me to new ideas.

And when I go back to the keyboard, the computer screen, the Internet, I handle it better. I force myself not to check email quite so compulsively, to take the occasional stretch break. For a while, anyway.

This year, my Internet sabbatical took me to a lake in New England and a beach in New Jersey. We’ve gone to both places for years; I can remember when the New Jersey rental didn’t have television—too remote for broadcast TV. I liked it back when we didn’t own cell phones; the rental didn’t have a landline installed, so I told potential visitors not to bother to call ahead, just come on down and look for me on the beach. Now both rentals had hundreds of television channels, thanks to satellite TV, and my cell phone means I’m never out of touch, even when I’d like to be.

But I thought I could still escape from the Internet. I just didn't know how hard it would be to hide.

In New England this year, we rented a cabin with nine other family members and friends. I didn't bring any devices that could get online, but among the 14 of us staying there (six of them children), we'd packed in two full-size laptops, four netbooks, and an iPhone. Good luck with that, I thought, not expecting any access beyond dial-up; hope you downloaded plenty of movies ahead of time.

It turned out I was the one surprised—the cabin had both WiFi and wired Ethernet. So folks were on their computers constantly—watching silly YouTube videos, checking email, looking at weather reports and sports scores, googling this and that and everything. Instead of the lapping of the water on the little beach, I heard the beeps of games and music from the latest viral video. And, even worse, they kept calling me to come over to the screen, as if my life would not be complete were I not to see a certain clip of extreme sailing or last night’s Daily Show.

I resolved that I would not touch a keyboard for the entire vacation; wouldn’t look at a screen if I could possibly avoid it. I was worried that once I touched a computer I’d quickly surf over to my email and would get sucked into responding and bang, I’d never be able to break away again.

It got a little tense one afternoon when my husband's computer wouldn't log on to the wireless. Since I'm tech support in the family, he asked me to try to fix it. “I'm not touching the computer,” I said, “but I think the thing to do would be to go to the Apple menu, open System Preferences, look at your Internet connection settings, and switch them to manual and then back to automatic, which will force it to reacquire the DHCP or something like that. Anyway, it should fix it.” (I could feel my little brain cells click into non-vacation mode, and I wasn't liking it.)

He held the computer out to me. “It'd only take you a minute to fix it.” I kept my arms at my sides, and a cousin jumped in to help him. (Thank you, Jeff.)

I figured this torture would last only a week, because there was no way our landlord in New Jersey was going to have hooked our shabby but much-loved beach rental up to the Internet. I was right about that; what I didn't count on was the new community WiFi: slow, but it worked in most of the rooms of the house. I was not pleased. I like listening to the ocean from the screened-in porch, and I find the street noise and occasional group-house party sounds only occasionally irritating. But having someone out there tap-tapping on a keyboard was sure to drive me nuts. (And I couldn't keep them off the porch; the signal was strongest there.)

The temptation to get online grew. My oldest son was not traveling with us; he was with a school group performing at the Edinburgh Fringe Festival. And he hadn't been calling. Other folks on the trip were posting pictures online, though, so we could Internet-stalk him to see if he was OK. I caved and looked at the photos twice while my husband paged through them (I did stick to my resolution not to touch the keyboard). I realized afterward that I would have been fine waiting until I got home; the photos didn't tell me anything other than that he had indeed made it to Scotland.

Like TV before it, and the phone before that, the Internet is invading my vacation space. And pretty soon there's going to be no place to hide.

Photo: John Lund/Paula Zacharias/Getty Images

Screen, Paper, and Information Overload

The following is a guest post by Nathan Zeldes, author of IEEE Spectrum's "How to Beat Information Overload."

After I published my Spectrum article on Infoglut, I received many responses through a variety of online channels. That's one of the perks of being an active cyber-citizen: you get to meet interesting people. However, a number of respondents voiced the same complaint: the article, they claimed, was far too long, thereby itself contributing to Information Overload.

This statement puzzled me, because I wrote the article in close cooperation with Spectrum's capable editors, and in conformance with their expectations. My curiosity piqued, I pulled some back copies of the magazine off my shelf and counted the pages. It turns out that my article is 3.5 printed pages long, well within the norm for Spectrum feature articles. In fact, I recently blogged about the shortening of articles in printed magazines over the decades, using Scientific American as an example; that magazine's features went from 12 content pages per article in the sixties down to 8 pages in 2009. My 3.5-pager would count as positively brief in Sci Am even today... So what is going on?

What made it even weirder is that the article did not appear particularly long in the printed magazine; but even I had to admit that the online version, which, it turns out, is the version the complaining readers saw, does seem to go on and on...

Thinking it over, I realized that there is a key difference between reading online and on paper. The information density on paper is certainly greater: lots more can fit—and be consumed—on a printed page than can be put on a screen of the same physical dimensions. The text of my article fills seven vertical “screenfuls” on my 22-inch monitor; it fits on 2.5 double-page spreads of Spectrum, even though each such spread is a bit smaller than the monitor’s area. This means that I could take in the printed article in three “gulps”, versus seven online. Even more important, the printed version is random access; I can move my eyes across the pages and home in on what I want, taking my cues from the layout, section header typography, and images. The long-scrolling online version is serial access: you need to scroll patiently through uniform-looking text blocks to find anything, and your best strategy would be to read it start to finish serially to see what’s in it. That’s why we use books, and not the scrolls the ancients had: flipping through pages is much more efficient if you plan to skim.
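Those figures make the density gap easy to quantify (treating a double-page spread and a screenful as roughly the same physical area, as the text does):

```python
import math

# Figures from the article: the same text fills 7 screenfuls online
# but only 2.5 double-page spreads in print (similar physical area each).
screenfuls = 7
spreads = 2.5

density_ratio = screenfuls / spreads   # paper packs ~2.8x more text per view
gulps_print = math.ceil(spreads)       # three "gulps" in print
gulps_online = screenfuls              # seven online

print(density_ratio, gulps_print, gulps_online)
```

Nearly three times the text per glance on paper, and fewer than half the "gulps" to take in the whole piece.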

This came as a surprise to me: I'm used to considering online reading as a valid alternative to print (until it comes to snuggling up in bed with a good novel, at any rate). Online has its advantages—searchability being a major one—but in an age of impatience, Gutenberg’s good ol’ system has its benefits as well!

Nathan Zeldes blogs on Information Overload here.

NORAD Ready To Track Santa Claus

The North American Aerospace Defense Command (NORAD) announced today it is ready to track Santa Claus's coordinates as soon as the red-suited old man takes off from the North Pole. No, don't worry, Santa will not be riding an ICBM-powered sleigh. This is just an annual public relations program by NORAD.

On 24 December, you (or your kids) can go to the NORAD Tracks Santa web site to see Santa's location in real time. You can also call 1-877-HI-NORAD (1-877-446-6723), send an email, or check Facebook, Twitter, Flickr, YouTube, Google Maps, and Google Earth (look for @noradsanta).

If you think this is all very silly, it's probably because you don't have a pre-teen bombarding (no pun intended) you with questions about Saint Nick. If you did, you'd thank NORAD for this initiative, which is actually really cute (check out their FAQ describing Santa's sleigh technical specs and whether fighter jets might intercept him) and began with an even cuter little tale. In 1955, a Sears store in Colorado Springs ran newspaper ads telling children to call Santa on a special phone number. But the number had a typo. When the first kid phoned in on Christmas Eve, on the other end of the line was Colonel Harry Shoup at the Continental Air Defense Command (CONAD), NORAD's predecessor, in Colorado Springs.

After more calls came in, Colonel Shoup realized one should tread lightly when it comes to children and their patron saint. So he dutifully ordered his operators to find Santa's exact location and report it back to every child who called that night. Ah, the Christmas spirit!

Below, NORAD's video of Santa's 2008 journey, captured with its Santa cams.


The Cat Brain Cliff Notes

Two leading scientists are embroiled in a controversy about a cat brain simulation. At first blush, the topic might seem silly. But the stakes are higher than the tired cat-fight jokes would lead you to think. This argument has larger implications for the future of AI research, and particularly for a field called computational neuroscience. The controversy has called into question not only the legitimacy of one researcher's work, but that of all brain simulation work. I think it's important to untangle the assumptions and accusations in clear, non-specialized language.


Last week at the 2009 Supercomputing conference, IBM researchers on DARPA's SyNAPSE project, led by Dharmendra Modha, presented a paper about a simulation that they said approached the scale of a cat's brain in terms of the number of neurons and synapses involved. They won the Gordon Bell prize for their efforts. A few days later, neuroscientist Henry Markram sent an open letter refuting Modha's claims and calling his paper a hoax.

First, let's understand the stakes and who's playing. Computational neuroscience—or cognitive computing, or biometaphorical computing—is a field so new that its luminaries have not yet agreed on a solid name for it. The goal (very simplified) is to build a brain.

But you can't create a brain out of meat any more than you can make it out of popsicle sticks, so you're stuck with computer simulations. That's not something either neuroscientists or computer scientists can do alone. It takes a village of electrical engineers, materials scientists, neuroscientists, biologists, computer scientists, nanotechnology researchers, and more. All these specialists bring different talents and assumptions. As such, the field is a fairly good embodiment of the Tower of Babel story, or the old story of the blind men feeling the elephant and arguing about whether an elephant is a foot or a trunk.

Markram v Modha

Consider Dharmendra Modha and Henry Markram. One is a computer scientist, the other a neuroscientist. "The conflict arises since both species are needed to simulate (computer scientist) a brain (neuroscientist)," says Massimiliano Versace, who works on the SyNAPSE project at the Department of Cognitive and Neural Systems at Boston University. "You need both."

Markram is a brilliant neuroscientist. He discovered spike timing dependent plasticity (STDP). For those of you whose heads just exploded, that means essentially he figured out how neurons learn. STDP is what makes you, a human, able to understand causality: you see a flash of lightning, hear a crack of thunder, watch a tree fall on your house, and you are able to say, "Aw, crap. Now I need to call my insurance company." Markram figured out how our neurons fire together and wire together. He's kind of a big deal.
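The pair-based form of that rule is simple enough to sketch in a few lines. What follows is the illustrative textbook version with made-up constants, not Markram's actual model: a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, weakens in the reverse order, and the effect decays exponentially with the gap between the two spikes.

```python
import math

def stdp_weight_change(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP update (illustrative constants; times in ms).

    delta_t = t_post - t_pre. Pre-before-post (delta_t > 0) strengthens
    the synapse (potentiation); post-before-pre weakens it (depression).
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau)
    return 0.0

# Lightning (pre) fires 5 ms before thunder (post): the link strengthens.
assert stdp_weight_change(5.0) > 0
# Same two spikes in the opposite order: the link weakens.
assert stdp_weight_change(-5.0) < 0
```

The causal asymmetry is the whole point: the same pair of spikes strengthens or weakens a connection depending purely on their order, which is what lets networks of neurons wire up before-and-after associations.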

Modha is also kind of a big deal. He figured out a new adaptive replacement cache algorithm. While I have no cute example to explain this, I will say that IBM was so thrilled about Modha's work (and the patent it yielded) that they gave him an office at their Almaden research center and carte blanche. Modha decided that he wanted to build a brain. The next year, Almaden's annual conference was on cognitive computing. DARPA soon came knocking.
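For the curious, the core idea behind an adaptive replacement cache can be sketched roughly like this. This is a heavily simplified toy, not IBM's exact patented algorithm: the cache is split between a recency list and a frequency list, and "ghost" records of recently evicted pages steer how the capacity is divided between the two.

```python
from collections import OrderedDict

class SimplifiedARC:
    """Toy sketch of the adaptive-replacement idea (not IBM's algorithm):
    T1 holds pages seen once (recency), T2 holds pages seen again
    (frequency), and ghost lists B1/B2 of evicted keys adapt the split."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                # target size of T1, adapted online
        self.t1 = OrderedDict()   # seen exactly once
        self.t2 = OrderedDict()   # seen at least twice
        self.b1 = OrderedDict()   # ghosts of pages evicted from T1
        self.b2 = OrderedDict()   # ghosts of pages evicted from T2

    def _evict(self, in_b2):
        # Prefer evicting from T1 when it exceeds its target share.
        prefer_t1 = len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)
        if self.t1 and (prefer_t1 or not self.t2):
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = None
        elif self.t2:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def access(self, key):
        """Touch a page; returns True on a cache hit."""
        if key in self.t1:        # second touch: promote to the frequency list
            del self.t1[key]
            self.t2[key] = None
            return True
        if key in self.t2:        # frequent page: refresh its position
            self.t2.move_to_end(key)
            return True
        if key in self.b1:        # recency ghost hit: give T1 more room
            self.p = min(self.c, self.p + 1)
            del self.b1[key]
            self._evict(in_b2=False)
            self.t2[key] = None
            return False
        if key in self.b2:        # frequency ghost hit: give T2 more room
            self.p = max(0, self.p - 1)
            del self.b2[key]
            self._evict(in_b2=True)
            self.t2[key] = None
            return False
        # Cold miss: make room if full, then insert into the recency list.
        if len(self.t1) + len(self.t2) >= self.c:
            self._evict(in_b2=False)
        self.t1[key] = None
        for ghosts in (self.b1, self.b2):   # keep ghost lists bounded
            while len(ghosts) > self.c:
                ghosts.popitem(last=False)
        return False
```

The design choice that made ARC famous: a long scan of one-off keys churns only the recency side, so pages that are reused keep their place on the frequency side, which plain LRU cannot guarantee.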

But the point is this: "It's safe to say that Modha is as brilliant a computer scientist as Markram is a neuroscientist," Versace says.

SyNAPSE v Blue Brain

Their respective projects are as different as the two researchers. Markram's Blue Brain project, at the Ecole Polytechnique Federale de Lausanne, is a simulation running on a (slightly older, smaller) IBM Blue Gene. With his model, Markram is investigating the roots of neurological disorders like Alzheimer's disease, autism, and depression. Markram says he is building the entire brain from the ground up, to illuminate how the summed activity of 100 billion neurons can result in consciousness. "Our whole sense of consciousness and memory exists because of the way our brains are wired," he says. He wants to engineer it. He wants a machine that will talk back to him someday, a machine that will be conscious inside the confines of its supercomputer. "That's the secret about Blue Brain," he says. "It will search the Internet by itself; it will search to become more real. Like a Pinocchio."

Modha is working on SyNAPSE, a project that couldn't be more different. With SyNAPSE, DARPA wants to create electronics that take a page out of the brain's book. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels." Everything you need to know about SyNAPSE is on the Neurdon blog, which is written by Boston University postdoc Versace and Ben Chandler, a PhD student in the department. BU is a subcontractor to HP and HRL, who compete with IBM on the DARPA project (this is how most DARPA projects are run: three competitors make everyone work harder).

For an explanation of what makes brains so great, I went to Richard Granger (who is, again, kind of a big deal at the Dartmouth Brain Engineering Laboratory). "Brains somehow carry out hugely parallel algorithms, using very slow, sparsely connected, probabilistic elements, and they somehow scale well as they grow," he says. "Big-brained mammals exhibit capabilities far beyond small brains, and there evidently are not severely diminishing returns." Also, they use a very small amount of power.

If we could learn from the brain, what might we achieve? With circuits as reconfigurable, specialized, and fault tolerant as the brain, a new species of computer would be better at recognizing faces and driving a car than the world's most sophisticated supercomputers. Right now, Roadrunner, the supercomputer that comes closest to replicating a human's ability to drive in rush-hour traffic, weighs 227 metric tons and requires a diet of about 3 megawatts. By contrast, the brain regularly handles rush-hour driving on 20 watts (comparable to the power consumption of a Nintendo Wii), and its 1.5 kilograms fit neatly into your skull.

The upshot is this: Both Modha's and Markram's projects are modeling the brain, but for different purposes, with different end results in mind, and so they focus on different aspects of brain modeling.

The Trouble With Models

But what both projects have in common is their goal of figuring out what makes the mammalian brain work the way it does. We are a long way off. As of 2009, no one has yet been able to simulate even the simplest, dumbest animal—a fruit fly, say—in the process of reacting to a drink of sugar water. "Fruit flies are actually really complicated," Chandler explains (see BU's CELEST project for more). "As far as I know, we haven't even created a synthetic aplysia or nematode yet."

Why is that? Jim Olds (who directs George Mason University's Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?

But it's a chicken and egg problem, because without a brain-specific theory of relativity, there is no way to identify a basic unit of neuroscience. And without a basic unit of neuroscience, you can't build a model. So, to get anywhere, you must abstract. "[Markram] thinks it's the neuron," says Olds. "But what if that's not what makes a brain work as a brain? What if it's the synaptic spine, or the ion channels?" There are even neuroscientists who think you can never model the brain—because in order to do it you'd have to model down to quantum effects.

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says Granger. "Or, worse, we have fifteen mutually incompatible definitions."

No Six for You

It should be pretty clear at this point that no one's going to be building a Caprica Six any time soon. Or a cat. Or a fruit fly. But that doesn't mean all attempts at modeling any aspect of the brain are useless, or to use Markram's more loaded language, a hoax.

"To understand the brain—or any complex system—it is unavoidable that it be viewed computationally," Granger says. Granger says we should treat brain circuits as, well, circuits. "Circuits on one hand are physical systems; on the other, they are carrying out algorithms. To understand them does entail extensive simulation; but also computational, algorithm-level analysis, i.e., the elucidation of principles from simulation and experiment. Not just "that" they work, but how and why they work." So, can we arrive at deep insights without building a few starting simulations first?

"We definitely want to build simulations, including large, intermediate, and small-scale models, at low, middle, and high levels," says Granger. "The field has indeed been doing so for many years, and much of what we currently understand (such as the examples given) grew from many such models. The field is making great strides, and work such as Henry's, and many others', are prominent examples."

Hoax or No Hoax?

And ultimately, that is where Modha's work is important. It's not that he's created a cat brain. Every journalist who interviewed him has heard him correct them in the following way: "No, no, it's not a cat brain. A cat-SCALE simulation."

Some would ask whether there is a point to having a cat-scale simulation if it does not model an actual cat with any great fidelity. I think you should ask the Gordon Bell prize people*. Because for computer science, this is indeed a big deal. Neurdon's Ben Chandler says, "I don't think [other neural simulation environments] can handle 147,000 processors or 144 terabytes of memory. Actually mapping a system that large, and getting only a 0.3 percent deviation in workloads across cores is pretty impressive."

"We all want to understand how brain circuits do what they do," Granger says, "in particular, how their computational powers (perception, recognition, memory retrieval) manage to outperform our extant engineering systems. Let's avoid this (vast) discussion for now, and jump to this: To achieve large-scale simulations that exhibit critical characteristics of brain circuits, we will very likely need to address four current roadblock issues: speed, power, scaling, and parallelism."

A teaser for the next post (which will address the first point in Markram's letter): Modha's work addresses all of these.

*And for those calling for the Gordon Bell people to rescind their prize, please recall that the Bell prize is "awarded each year to recognize outstanding achievement in high-performance computing. The purpose of the award is to track the progress over time of parallel computing, with particular emphasis on rewarding innovation in applying high-performance computing to applications in science. Prizes are awarded for peak performance as well as special achievements in scalability."




