And so it continues. As we predicted in a podcast last month, Intel's antitrust worries aren't anywhere near over.
After officially investigating the chipmaking giant for over a year and a half, the U.S. Federal Trade Commission took the plunge and sued. That doesn't come as a total surprise, given last month's similar suit from the New York State attorney general's office and a fine levied earlier this year by the European Commission, the EU's enforcement arm. Intel also recently paid off rival AMD in return for that company dropping its lawsuit against Intel.
“Intel has engaged in a deliberate campaign to hamstring competitive threats to its monopoly,” said Richard A. Feinstein, Director of the FTC’s Bureau of Competition. “It’s been running roughshod over the principles of fair play and the laws protecting competition on the merits. The Commission’s action today seeks to remedy the damage that Intel has done to competition, innovation, and, ultimately, the American consumer.”
Intel fired back, saying the FTC would like to impose restrictions so harsh that "it would make it impossible for Intel to conduct business." Furthermore, the company claims,
The FTC's case is misguided. It is based largely on claims that the FTC added at the last minute and has not investigated. In addition, it is explicitly not based on existing law but is instead intended to make new rules for regulating business conduct. These new rules would harm consumers by reducing innovation and raising prices.
But the FTC isn't buying. According to its release:
To remedy the anticompetitive damage alleged in the complaint, the FTC is seeking an order which includes provisions that would prevent Intel from using threats, bundled prices, or other offers to encourage exclusive deals, hamper competition, or unfairly manipulate the prices of its CPU or GPU chips. The FTC also may seek an order prohibiting Intel from unreasonably excluding or inhibiting the sale of competitive CPUs or GPUs, and prohibiting Intel from making or distributing products that impair the performance–or apparent performance–of non-Intel CPUs or GPUs.
The time honored trope of teen movies is the mousy nobody who finally takes off her glasses and lets down her ponytail, and suddenly she's the prom queen. In the semiconductor industry's version of that movie, that girl's name is Packaging.
Packaging was the undercurrent of much of this year's International Electron Devices Meeting. No one could have put it better than Semiconductor Industry Association vice president Pushkar Apte, who stated that "packaging is the red-headed stepchild" of the industry. Until now, anyway. Two major forces are driving attention back to packaging: medical applications and the end of scaling.
As an example of the former, at IEDM, Purdue University researchers showed implantable wireless transponders that can monitor radiation doses received during cancer treatments. The miniature transponders would be implanted near a tumor during radiation therapy. The part is a prototype, as far as I understand, and the Purdue researchers are working with the radiation oncology department at the University of Texas Southwestern Medical Center. There, doctors can give them an idea of what's needed in terms of packaging. But what happens when a part like this transitions from prototype to off-the-shelf? It's going to need innovative packaging. That's what.
The second driver is the ever-impending end of Moore's law. It's no secret that engineers are running out of options with transistor scaling. The industry is nominally at the 32-nm node—which means Intel is about to start shipping its first microprocessor with 32-nm feature sizes, called Westmere. No one else is.
But other chipmakers are struggling to keep up with that roadmap. AMD released its first 45-nm processors only this past January. According to EE Times, "a period of more than two years is now expected between the introduction of AMD's 32nm technology and the previous 45nm node first seen in late 2008.”
TSMC is also lagging behind Intel but ahead of AMD with 32-nm process technology, which it expects to have ready in 2010. (For more on where everyone stands with 32-nm process technology, read this exhaustively researched EE Times piece.)
Why is it so hard to scale? Researchers agree that the industry has hit a brick wall because scaling transistors to ever-tinier dimensions causes reliability to fall steeply. Researchers who didn't want to go on the record told me as much, and at a short course on Sunday, attendees repeatedly expressed frustration at the difficulties of further scaling.
3D integration looks like a viable alternative for chipmakers who don't want to bang their heads against Moore's law in the quest for 22-nm process technology. 3D integration boils down to this: stack 'em vertically instead of squeezing more and tinier transistors on a planar surface. It means that with a fixed transistor and die size, you can still add processors and memory. Johns Hopkins University electrical engineering professor Andreas Andreou estimated that by the time the industry arrives at 22-nm process technology, it would be more effective to stack four 22-nm chips than press on to the 11-nm node. "The gold rush of shrinking will be replaced by 3D," he predicted.
Even Nvidia is on the 3D bandwagon: John Chen said in his keynote presentation that graphics processors can’t make progress unless they go 3D. Two IEDM sessions were devoted entirely to advanced 3D technology and processing for memory and logic. In one session chaired by researchers from IBM and Samsung, CEA-LETI researchers threw down the gauntlet: For the first time, they said, 3D CMOS integration can be considered a viable alternative to sub-22nm technology nodes. TSMC researchers positioned 3D integration as healthy competition for the 28-nm node. IMEC, Fujitsu, and ST Micro presented their research into making 3D work.
Researchers are divided on the severity of the issues that plague 3D integration: heat, alignment, and metal contamination all remain. But according to Hopkins professor Andreou and NEC researcher Yoshihiro Hayashi, heat is a red herring: any number of innovations will easily solve the heat problem by the time 3D-packaged wafers are ready to hit the shelves (among them, using through-silicon vias to transport the excess heat to the heat sink, but that’s a whole other story).
But we're not at the prom yet. (We’re still watching the part of the movie where the best friend realizes that our girl Packaging needs a haircut and a full face of makeup.)
You’ll note that most of the problems researchers described are about packaging. Many ingredients in 3D stacks rely on innovations in packaging to make them viable. To solve the heat problem, for example, researchers are assuming that new ways of diverting excess heat to the heat sink will be developed. But who's going to figure that out? Are through-silicon vias part of the chip or part of the packaging? What about those heat sinks?
3D chips require new kinds of packaging. And new kinds of packaging require innovation. And that, at last, is the crux of the problem: innovations in packaging? Whose problem is that?
The semiconductor industry has disintegrated, over the past decades, into many horizontal layers. Consider how the chip in your laptop got there. A designer at a fabless semiconductor company probably designed it and then sent it to TSMC. TSMC manufactured the chip based on those designs and sent it to the packaging company, which packaged the chip and sent it to the systems guy, who put it all together and sent it to its final destination, the end unit provider.
Now companies are finding that they need to re-integrate at the leading edge. Some fabless companies have said that in order to get the packaging they want, they need to invest in packaging startups.
That disintegration/reintegration dynamic raises a question: who across these companies has ownership, with all the rewards and liabilities that word implies? If packaging becomes more important and plays a bigger role in chip design and innovation, it will need to address issues of heat, reliability, and safety, particularly for medical applications.
The packaging industry as a whole sees about $20 billion in revenue each year. Contrast that with Intel alone, which pulls down $40 to $50 billion a year. Additionally, chipmakers on average pump almost 20 percent of their income back into R&D. Contrast that again with R&D spending by packaging companies. ASE, the biggest of the packaging companies, brings in about $3.5 billion a year and holds the record among its cohort for research and development spending: 3.2 percent. Twenty percent of $40 billion is a lot, and that’s probably why Intel is going to be the first to ship 32-nm processors. But 3.2 percent of $3.5 billion? It’s not enough for any kind of risky, out-of-the-box innovation. The industry is just going along to get along.
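To put those R&D figures side by side, here's a back-of-the-envelope sketch using the round numbers above (treating "income" loosely as revenue for the comparison):

```python
# Back-of-the-envelope comparison of annual R&D budgets,
# using the round figures cited above.
intel_revenue = 40e9    # low end of Intel's $40-50 billion range
intel_rd_rate = 0.20    # ~20 percent plowed back into R&D
ase_revenue = 3.5e9     # ASE's annual revenue
ase_rd_rate = 0.032     # ASE's 3.2 percent R&D spend

intel_rd = intel_revenue * intel_rd_rate   # $8 billion
ase_rd = ase_revenue * ase_rd_rate         # $112 million

print(f"Intel R&D: ${intel_rd/1e9:.1f}B, ASE R&D: ${ase_rd/1e6:.0f}M")
print(f"Ratio: {intel_rd/ase_rd:.0f}x")
```

By this rough math, Intel's R&D budget alone is on the order of 70 times ASE's, and roughly half the packaging industry's entire annual revenue.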
Who can blame them? Why should they absorb the risks that will plague any kind of innovation in packaging? Innovation in packaging also implies liability. Just look at what happened to Apple last year when Nvidia famously screwed up its GeForce GPUs. Apple had to replace the faulty chips for free. The problem was traced to a packaging defect.
Microsoft had to write down its Xbox 360 chips because of packaging issues that led to the infamous "red rings of death," to the tune of $1 billion.
And if you’re still not convinced, think about the potential liability in medical implants.
Right now, no one is in a position to be responsible for innovation in packaging, but innovation is sorely needed. Someone needs to step up and give this poor girl a makeover.
Updated Wednesday, December 16, 2009: Video of forum posted below.
NASA has a problem. The U.S. space agency is trying to build new rockets and new spaceships, but it’s not clear where they should go (see our Special Report on "Why Mars? Why Now?" for detailed coverage). This summer’s review of U.S. human spaceflight plans, conducted by a ten-person commission headed by former Lockheed Martin chairman and CEO Norman Augustine, came up with several options in a report released in October (failure was not one of them).
In Friday’s forum on the implications of that report, held at MIT, Augustine and fellow committee member Edward Crawley, an MIT professor of engineering and aeronautics and astronautics, shared the floor with space policy historians John Logsdon and Asif Siddiqi to discuss what was in the report, what it means, and the next steps toward a political decision.
Key points emphasized in the panel:
--Mars must be the goal, but it must not be the first destination. It’s too hard, too expensive, and we need to learn to live and work in space first.
--NASA has to get out of the crew and cargo business, and focus instead on what to do beyond low-earth orbit; let the commercial sector take over hauling folks and stuff to orbit.
--The NASA administrator must have authority to turn plans into action, i.e. to be “the CEO of NASA,” instead of having his hands tied by Congress. (Currently, Congress won’t allow the administrator to reduce NASA’s workforce or close any facilities, since they bring revenue and jobs to several states. But that makes it harder to run the agency efficiently.)
--It’s a multiplayer space game now, not just the U.S. and Russia. Future space exploration missions will have to take into account burgeoning space programs in China and India, in addition to already-active programs in Japan, Canada and Europe.
--The president needs to actually make a decision, and then commit to it. Let’s either have a viable human spaceflight program, with enough resources to make it valuable, or let’s have the courage to end it now, Augustine said, rather than letting the program struggle along half-heartedly.
The Augustine report makes clear that the current “Constellation” program, which was put in place to answer the Bush administration’s 2004 challenge to return to the moon and Mars following the space shuttle’s retirement in 2010—but which was not funded accordingly—is unsustainable at its current funding level.
So the three decisions the president will have to make, Crawley said, are the degree to which the U.S. should embrace the international community; the destination (i.e. the moon, Mars, a variety of near-Earth objects like asteroids, or moons of Mars and Mars fly-arounds); and the budget.
Though the committee members were careful not to endorse any particular options, given that their charter was merely to state choices, not to recommend any, it’s possible to read between the lines. A flexible path is the most logical, and Crawley’s explanation had me convinced. It provides for intermediate steps, multiple new accomplishments, and continued exploration while simultaneously building equipment for future landings on the moon and Mars. It also provides the chance to have many new “firsts” along the way, instead of waiting thirty years for the first person to land on Mars. (Here’s another good analysis of the flexible option by longtime space expert and former NASA engineer James Oberg.)
Unfortunately, the logical path is also the one that will result in benefits ten or so years down the road, Crawley suggested, which means it’s unlikely as a political decision.
Regardless, the decision is coming soon. Recommendations prepared by NASA, the Office of Science and Technology Policy, and the Office of Management and Budget are even now making their way to the president’s desk for a decision, according to John Logsdon. Though President Obama will have to make the choice by the end of December to influence the FY 2011 budget, Logsdon predicts that the announcement won’t come until later, as part of a state of the union address or in a separate speech. It is likely that any such announcement will include an invitation to international partners to join in crafting the future of spaceflight, Logsdon said in the MIT forum.
Logsdon also expects that the president himself will make the decision, rather than leaving it to his science advisor, national security council, or budget people. “And the president will want to take credit for it,” Logsdon added.
In the meantime, an agreement reached earlier this week between the House and Senate on a 2010 appropriations bill would ensure that the current Constellation program won’t be terminated without official approval while the agency awaits Obama’s verdict.
The bill would provide U.S. $3.8 billion to continue human spaceflight operations through 2010 in the absence of a decision from the administration. If voted in, the bill would also require any changes in the program’s direction to be approved by later acts of Congress.
The discussion at MIT was thoughtful and intelligent, and it showed that the Augustine committee really did get a lot of analysis done in a very short time. But no one can predict the president’s decision. As Logsdon concluded, “I guess the final word is: stay tuned.”
Watch a video of the forum below (about two hours - moderator David Mindell starts speaking 5 minutes in, panel introduced starting at 12:20, comments from the panel begin at 17:50) or catch it at MIT Aero-Astro news.
Now the Cornell team, led by electrical and computer engineering professor Sheila Hemami, has developed prototype devices and is testing them with about 25 American Sign Language (ASL) speakers in the Seattle area.
“We completely take cell phones for granted,” Hemami said. “Deaf people can text, but if texting were so fabulous, cell phones would never develop. There is a reason that we like to use our cell phones. People prefer to talk.” The technology, Hemami continued, is about much more than convenience. It allows deaf people “untethered communication in their native language” – exactly the same connectivity available to hearing people, she said.
The challenge was to make the phone's video process enough frames per second for test subjects to have conversations in real-time, despite low bandwidth, while not draining the phone's battery.
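To get a rough sense of why frame rate and bandwidth are in such tension, here's an illustrative calculation with figures of my own choosing (not the Cornell team's actual parameters):

```python
# Rough bandwidth estimate for low-resolution sign-language video.
# All figures below are illustrative assumptions, not project specs.
width, height = 176, 144   # QCIF, a common size for early mobile video
bits_per_pixel = 12        # raw YUV 4:2:0
fps = 10                   # assumed minimum for intelligible signing
compression_ratio = 100    # aggressive video codec, assumed

raw_bps = width * height * bits_per_pixel * fps
compressed_kbps = raw_bps / compression_ratio / 1000
print(f"raw: {raw_bps/1e6:.1f} Mbit/s, compressed: ~{compressed_kbps:.0f} kbit/s")
```

Even under these generous assumptions, the stream needs on the order of 30 kbit/s, which is already near the ceiling of a 2G-era data channel; that is why squeezing out enough frames per second without killing the battery is the hard part.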
Now the researchers are working to make the phones "as user friendly as possible," while reducing the cost of integrating mobile ASL software into the devices.
Who doesn’t love Darpa, the Defense Advanced Research Projects Agency? After all, we have Darpa to thank for the Internet.
In recent years, Darpa has stood out as the agency that knows how to capture the imagination of scientists, engineers, and the general public. Take the Grand Challenges that sent autonomous vehicles across the desert. These were hugely difficult tasks that sent engineers all over the country scrambling night and day; the effort was big, as was the prize (US $2 million).
But even when the prize is small (just $40,000), it seems that Darpa knows how to go for the emotional punch, the vivid symbol that captures the imagination.
Because tomorrow, Saturday, 5 December, Darpa staff members will be tying up 10 red balloons at unannounced locations around the U.S. Each balloon will be visible from a public roadway. The challenge—be the first to identify the latitude and longitude of each balloon. You can work alone; you can work in teams. You can use any technical tool you can think of; you can simply cruise around looking for them. The balloons will only be visible on Saturday, but you’ll have up to nine days afterwards to submit your entry.
The contest is officially called the Network Challenge; the organizers assume that you won’t be able to win without the massive use of computer communications tools and social networks. And that’s what Darpa is trying to figure out—just how these things are used for collaboration today. It is also holding the event to commemorate the 40th anniversary of the day the first four nodes of the Arpanet—the predecessor to the Internet—were connected.
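One way a team might filter crowd-sourced sightings, sketched entirely as my own illustration (Darpa prescribes no method), is to trust a reported balloon location only when several independent reports agree to within a small distance:

```python
import math

# Sketch: accept a crowd-reported balloon location only when at least
# min_reports independent sightings fall within radius_km of each other.
# Purely illustrative; not anything Darpa or the teams actually specify.
def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def confirmed(reports, radius_km=2.0, min_reports=3):
    """Return averaged locations supported by enough mutually close reports."""
    hits = []
    for r in reports:
        cluster = [p for p in reports if haversine_km(r, p) <= radius_km]
        if len(cluster) >= min_reports:
            avg = (sum(p[0] for p in cluster) / len(cluster),
                   sum(p[1] for p in cluster) / len(cluster))
            # Keep each confirmed cluster only once.
            if all(haversine_km(avg, h) > 0.01 for h in hits):
                hits.append(avg)
    return hits

# Three reports agree near one spot; a lone report elsewhere is ignored.
reports = [(38.8895, -77.0353), (38.8898, -77.0350),
           (38.8893, -77.0358), (40.7128, -74.0060)]
print(confirmed(reports))  # one confirmed location, near the first cluster
```

The interesting part, of course, is what such a filter can't do: it has to assume reports are honest, and the real contest turned as much on recruiting trustworthy observers as on any algorithm.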
It’s a very cool task; sort of Rick Smolan, originator of the Day in the Life series of photography projects, meets Christo, the artist behind huge and colorful public installations.
But I think the real evidence of imaginative genius is the choice of the red balloon as the object to identify. There’s something magical about a balloon, a lonely balloon, tied in the middle of nowhere—French filmmaker Albert Lamorisse best captured that magic in his classic Le Ballon Rouge, but any child who spends a day with a balloon tied to his wrist or stroller understands that magic. Kudos to the folks at Darpa for understanding it too. And good luck to the competitors.
I’m doing research on real-time search to learn what all the fuss is about, so I called up Google. Why not? They’re anxious to get “real-time” search into their results, and I wanted to know what the term means to them. Turns out they’re making a “cool” announcement about it Monday, and won’t talk to me till after that.
Much of the buzz about real-time search points to scouring Twitter feeds and Facebook status updates for the most current information on the web. But that’s pretty much the opposite of how traditional Google search works—which is based on producing results according to authority rankings established over time. So how will this real-time information get roped in with (or distinguished from) more traditional content searches, and how will the relevance of results be decided?
Forgetting the mechanics for a minute, let’s get to semantics. What does “real-time” even mean? Does it mean finding info that’s just been published, because it just happened? Or rather finding websites that have just been updated, but maybe the events they describe happened awhile ago? Is it just for Tweets and the like, or will it be more comprehensive?
Danny Sullivan’s Search Engine Land has a lengthy post from a few months ago (definitely not real time, but still worth reading) that explores this topic and also compares search results from several smaller fish like Collecta and Scoopler. Sullivan argues that the term “real-time” should refer to information that’s posted immediately, as it happens—basically, Tweets and status updates, or what he calls “microblogging” (the comments section of the post provides nice counterpoint arguments). As for news and even blogs, Sullivan suggests, that content’s already history by the time it goes live on the web.
So what’s next? Will Google be getting access to Twitter’s Firehose Feed, which would allow the company to search all Tweets as they happen and index them? I was assured that the upcoming announcement would answer all my questions, so maybe we’ll find out more about how Google plans to walk the line between what’s hot now, and what’s the most relevant answer to my current query.
I hope it also gives us a hint of Google’s planned audience for real-time search. Who is this kind of search useful for, and what’s the value added? Is it just for people looking to learn what’s going on, faster than they can read a newspaper, log in to a website, or even scan a blog post or RSS feed? What about someone doing research on the history of a vacation destination? Will real-time search trump regular old Google search?
We’ll try to post as fast as possible after Monday’s big announcement.
In this month's issue of IEEE Spectrum, I describe Gaza's jury-rigged power grid, and the challenges involved in providing electricity in a territory essentially cut off from the world. But ultimately, when the grid proves unreliable, people take matters into their own hands.
Just as war and the economic blockade have forced those managing the electricity grid to cope with the seemingly impossible task of providing steady power, ordinary Gazans have had to learn to improvise when possible. When electricity faltered during the war, residents exchanged tips on creative workarounds. A favorite is harvesting the low-level current that runs through the telephone lines to recharge cellphones.
But residents also rely on consumer goods—candles, batteries and generators—smuggled through the tunnels that connect southern Gaza to Egypt. Driving into a bombed-out section of Rafah, near the Egyptian border, I got a chance to see what, in the absence of a properly working grid, has by default become this option of last resort.
Smuggling relies, in large part, on the other side turning a blind eye to the problem; that makes smuggling anything through Israel impossible, so the consumer-goods tunnels all connect to Egypt. A large number of the tunnels had originated in houses (tunnels are typically owned by families), now smashed to rubble, but many tunnels are already open again, covered in some cases only by an impromptu shed to shield them from Israeli aircraft and drones that might spot them from above. We stopped to go down into the shafts of two tunnels—one for fuel and one for cement. (Tunnels are often designated for specific cargo. For instance, bigger tunnels are reserved for cement and other goods and are reinforced with wood; fuel tunnels can in some cases be smaller.) The tunnel owner asked that we photograph quickly; if Hamas security caught us, they would demand money.
During the 2008–2009 war, Israel bombed the tunnels, collapsing many of them, but a large number of the underground passageways remain, and new ones are constantly being dug. Smuggling operations quickly resumed.
From the Palestinian perspective, the tunnels act as a slow-release pressure valve, allowing just enough goods into the territory to prevent a catastrophe, but not enough to allow the economy to function normally. While Israel points to the illegal tunnels as another sign of Gazan—and particularly Hamas—dysfunction, Hamas naturally blames Israel. “If [the borders] are open, there are no tunnels,” Fawzy Barhoom, Hamas’s spokesman, says in an interview in his Gaza office.
In the meantime, the tunnels don’t help the power plant, says Rafiq Maliha, a plant manager, with a wry laugh. The idea of a sophisticated power plant using contraband fuel or parts is just unrealistic. “We are talking about a power plant,” he said. “We can’t smuggle [parts] through the tunnels.”
These days, engineers are working hard to make our network connections faster and more reliable. In this issue, David Schneider reports that communications satellite company Iridium is upping its maximum data rate fourfold, and Clark T-C. Nguyen suggests that mechanical components may be the way to make future wireless devices better. Information is indeed coming at us faster and in more places. But that’s not all good. In October, author Nathan Zeldes talked about the importance of getting control of this flood that threatens productivity and creativity.
For me, the only way to get such control is to cut off my network connections, all of them, for three weeks every year.
I’ve taken this annual three-week Internet sabbatical for about as long as I can remember. (I didn’t used to call it that; I used to call it vacation.) It acts as a reset button. It reminds me that looking out into the distance instead of at the computer in front of me makes a real difference in how my eyes feel; that getting my hands off the keyboard and moving around regularly is good for my body; I sit differently, stand taller. It reminds me that having time to let my mind drift, instead of being yanked from demand to demand, can lead me to new ideas.
And when I go back to the keyboard, the computer screen, the Internet, I handle it better. I force myself not to check email quite so compulsively, to take the occasional stretch break. For a while, anyway.
This year, my Internet sabbatical took me to a lake in New England and a beach in New Jersey. We’ve gone to both places for years; I can remember when the New Jersey rental didn’t have television—too remote for broadcast TV. I liked it back when we didn’t own cell phones; the rental didn’t have a landline installed, so I told potential visitors not to bother to call ahead, just come on down and look for me on the beach. Now both rentals had hundreds of television channels, thanks to satellite TV, and my cell phone means I’m never out of touch, even when I’d like to be.
But I thought I could still escape from the Internet. I just didn’t know how hard it would be to hide.
In New England this year, we rented a cabin with nine other family members and friends. I didn’t bring any devices that could get online, but among the 14 of us staying there (six of them children), we’d packed in two full-size laptops, four netbooks, and an iPhone. Good luck with that, I thought, not expecting any access beyond dial-up; hope you downloaded plenty of movies ahead of time.
It turned out I was the one surprised—the cabin had both WiFi and wired Ethernet. So folks were on their computers constantly—watching silly YouTube videos, checking email, looking at weather reports and sports scores, googling this and that and everything. Instead of the lapping of the water on the little beach, I heard the beeps of games and music from the latest viral video. And, even worse, they kept calling me to come over to the screen, as if my life would not be complete were I not to see a certain clip of extreme sailing or last night’s Daily Show.
I resolved that I would not touch a keyboard for the entire vacation; wouldn’t look at a screen if I could possibly avoid it. I was worried that once I touched a computer I’d quickly surf over to my email and would get sucked into responding and bang, I’d never be able to break away again.
It got a little tense one afternoon when my husband’s computer wouldn’t log on to the wireless. Since I’m tech support in the family, he asked me to try to fix it. “I’m not touching the computer,” I said, “but I think the thing to do would be to go to the Apple menu, open system preferences, look at your Internet connection settings, and switch them to manual and then back to automatic, which will force it to reacquire the DHCP or something like that. Anyway, it should fix it.” (I could feel my little brain cells click into non-vacation mode, and I wasn’t liking it.)
He held the computer out to me. “It’d only take you a minute to fix it.” I kept my arms at my sides, and a cousin jumped in to help him. (Thank you, Jeff.)
I figured this torture would only last a week, because there was no way our landlord in New Jersey was going to have hooked our shabby but much-loved beach rental up to the Internet. I was right about that; what I didn’t count on was the new community wifi—slow, but it worked in most of the rooms of the house. I was not pleased. I like listening to the ocean from the screened-in porch, and find the street noise and occasional group-house party sounds only occasionally irritating—but having someone out there tap-tapping on the keyboard was sure to drive me nuts. (And I couldn’t keep them off the porch; the signal was strongest there.)
The temptation to get online grew. My oldest son was not traveling with us; instead, he was with a school group performing at the Edinburgh Fringe Festival. And he hadn’t been calling. However, other folks on the trip were posting pictures online, so we could Internet-stalk him to see if he was OK. I caved and looked at the photos twice while my husband paged through them; I did stick to my resolution not to touch the keyboard. I realized afterwards that I would have been fine waiting until I got home: the photos didn’t tell me anything other than that he had indeed made it to Scotland.
Like TV and the phone before it, the Internet is invading my vacation space. And pretty soon there’s going to be no place to hide.
After I published my Spectrum article on Infoglut, I received many responses through a variety of online channels. That's one of the perks of being an active cyber-citizen: you get to meet interesting people. However, a number of respondents focused on the same complaint: the article, they claimed, was far too lengthy, thereby itself contributing to Information Overload.
This statement puzzled me, because I wrote the article in close cooperation with Spectrum’s capable editors, and in conformance with their expectations. My curiosity piqued, I pulled out some back copies of the magazine from my shelf and counted the pages. Turns out that my article is 3.5 printed pages long, well within the norm for Spectrum feature articles. In fact I recently blogged about the shortening of articles in printed magazines over the decades, using Scientific American as an example; that magazine’s features went down from 12 content pages per article in the sixties to 8 pages in 2009. My 3.5 pager would count as positively brief in Sci Am even today... So what is going on?
What made it even weirder is that the article, in the printed magazine, did not appear particularly long; but even I had to admit that the online version (which, it turns out, is the version the complaining readers saw) does seem to go on and on...
Thinking it over, I realized that there is a key difference between reading online and on paper. The information density on paper is certainly greater: lots more can fit—and be consumed—on a printed page than can be put on a screen of the same physical dimensions. The text of my article fills seven vertical “screenfuls” on my 22-inch monitor; it fits on 2.5 double-page spreads of Spectrum, even though each such spread is a bit smaller than the monitor’s area. This means that I could take in the printed article in three “gulps”, versus seven online. Even more important, the printed version is random access; I can move my eyes across the pages and home in on what I want, taking my cues from the layout, section header typography, and images. The long-scrolling online version is serial access: you need to scroll patiently through uniform-looking text blocks to find anything, and your best strategy would be to read it start to finish serially to see what’s in it. That’s why we use books, and not the scrolls the ancients had: flipping through pages is much more efficient if you plan to skim.
This came as a surprise to me: I'm used to considering online reading as a valid alternative to print (until it comes to snuggling up in bed with a good novel, at any rate). Online has its advantages—searchability being a major one—but in an age of impatience, Gutenberg’s good ol’ system has its benefits as well!
The North American Aerospace Defense Command (NORAD) announced today it is ready to track Santa Claus's coordinates as soon as the red-helmeted old man takes off from the North Pole. No, don't worry, Santa will not be riding an ICBM-powered sleigh. This is just an annual public relations program by NORAD.
On 24 December, you (or your kids) can go to the NORAD Tracks Santa web site to see Santa's location in real time. You can also call 1-877-HI-NORAD (1-877-446-6723), send an email to email@example.com, or check Facebook, Twitter, Flickr, YouTube, Google Maps, and Google Earth (look for @noradsanta).
If you think this is all very silly, it's probably because you don't have a pre-teen bombarding (no pun intended) you with questions about Saint Nick. If you did, you'd thank NORAD for this initiative, which is actually really cute (check out their FAQ describing Santa's sleigh technical specs and whether fighter jets might intercept him) and began with an even cuter little tale. In 1955, a Sears store in Colorado Springs ran newspaper ads telling children to call Santa on a special phone number. But the number had a typo. When the first kid phoned in on Christmas Eve, on the other end of the line was Colonel Harry Shoup at the Continental Air Defense Command (CONAD), NORAD's predecessor, in Colorado Springs.
After more calls came in, Colonel Shoup realized one should tread lightly when it comes to children and their patron saint. So he dutifully ordered his operators to find Santa's exact location and report it back to every child who called that night. Ah, the Christmas spirit!
Below, NORAD's video of Santa's 2008 journey, captured with its Santa cams.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.