Tech Talk

CES 2010: Livin' the 3D Life

CES has barely started, but it has already come into focus. I can't tell you exactly what the show is about yet, but I can tell you its format--3D.

Today was a day of big press conferences by big companies, and each one highlighted 3D. Toshiba, Samsung, Panasonic, even Monster Cable--they all touted their new 3D products. I'm writing this instead of going to Sony's 90-minute press conference (twice as long as anyone else's; my colleague Tekla Perry is there), but I guarantee that more than a few of those minutes will be devoted to new 3D televisions and surely 3D Blu-ray players (and who knows what other 3D products as well; Panasonic showed a 3D video camera).

According to Jeffrey Katzenberg, 4 of the top 10 box office successes of the 2009 movie year were 3D, even though there were only 10 3D feature movies made in the entire year (out of 170 total movies released), including his own Monsters vs. Aliens. Yes, that Jeffrey Katzenberg--the CEO of DreamWorks Animation. He was on hand at the Samsung press conference (along with the CEO of Technicolor, Fred Rose) to express his admiration for Samsung's new line of 3D-capable televisions ("works of art") and his happiness at being part of "the Samsung family."

Samsung was not the only maker to pull big-name partners out of the wings. Panasonic trotted out recorded appearances by the heads of DirecTV and Skype and, in person, film producer Jon Landau, whose movie Avatar has already, he mentioned in passing, broken the $1 billion mark. (According to Box Office Mojo, two-thirds of that is outside the U.S.) DirecTV announced that beginning tomorrow it will be broadcasting 3D to its customers, with new set-top boxes and software to accommodate it.

Toshiba made 3D the centerpiece of its press conference. More accurately, the company's major announcement was about several new lines of televisions that incorporate its version of the Cell microprocessor, and what makes the Cell especially important is the way it renders 3D better than conventional processors. (Spectrum gave the Cell a thumbs up all the way back in January 2006, in "Winner: Multimedia Monster"; more recently we made gentle fun of Toshiba for building it into a top-of-the-line laptop, the Qosmio, though we did include it in our 2008 Holiday Gift Guide.)

Even Monster Cable followed form, both in terms of celebrity endorsers--flanked by the son, daughter, and nephew of the late Miles Davis, company president Noel Lee announced a special Miles Davis headphone-and-CD-and-DVD package in honor of the 50th anniversary of "Kind of Blue"--and in terms of 3D. Monster doesn't have any 3D-specific products, but Lee got one of today's biggest laughs when he announced, in a booming voice, "I LOVE 3D! Not because I love 3D, but because it needs faster cables!" With today's announcements, Monster's HDMI cables now go up to a crazy 15.8 Gb/s.

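Lee's quip survives a back-of-the-envelope check. Here's a rough sketch of the arithmetic, with assumed numbers that are mine for illustration, not Monster's:

```python
# Back-of-the-envelope sketch (assumed numbers, not Monster's):
# raw bandwidth for frame-packed 1080p 3D video over HDMI.
width, height = 1920, 1080   # 1080p frame
bits_per_pixel = 24          # 8-bit RGB
fps = 60                     # assumed refresh rate
eyes = 2                     # 3D sends a full frame per eye

raw_bps = width * height * bits_per_pixel * fps * eyes
tmds_bps = raw_bps * 10 / 8  # HDMI's TMDS coding carries 8 data bits per 10

print(f"raw video:   {raw_bps / 1e9:.1f} Gb/s")   # ~6.0 Gb/s
print(f"on the wire: {tmds_bps / 1e9:.1f} Gb/s")  # ~7.5 Gb/s
```

Double the color depth and you're knocking on 15 Gb/s, which is presumably the territory Monster is aiming at.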
There's no limit to 3D mania. Tomorrow I have an appointment to talk with a company making a $100 pair of 3D active glasses that have a high-end microprocessor built right into them.

Only Sharp stood outside, not even a little wet from the wave of 3D products announced here. Instead, it showed off something that seemed even more innovative, or at least more immediately useful: a new collection of televisions that use what it calls QuadPixel technology. In short, it has added a fourth, yellow subpixel--instead of RGB, its sets are now RGBY. It had seven different QuadPixel televisions on stage, and I have to say, the colors looked stunning. And no special glasses were needed.

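Sharp didn't say how it maps incoming RGB signals onto four subpixels, but the basic idea is easy to sketch. Here's one naive mapping (my own guess for illustration, emphatically not Sharp's actual QuadPixel processing):

```python
# Naive RGB-to-RGBY sketch (my illustration; Sharp's actual QuadPixel
# processing is proprietary and certainly more sophisticated).
def rgb_to_rgby(r, g, b):
    """Route the overlap of red and green to a dedicated yellow subpixel."""
    y = min(r, g)               # yellow is the light red and green share
    return r - y, g - y, b, y   # remaining red, remaining green, blue, yellow

print(rgb_to_rgby(200, 180, 40))  # a gold tone becomes (20, 0, 40, 180)
```

The payoff, if the on-stage demos are any guide, is golds and brasses that a red-green mix struggles to render.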

President-Elect of Bioethics Society Highlights Ethics of Deep Brain Stimulation

The American Society for Bioethics and Humanities has chosen Dr. Joseph Fins as its president for the 2011 term, a choice that will likely bring needed attention to the ethical issues surrounding the implantation of deep brain stimulators.

For much of his career at Weill Cornell Medical College, Fins has done what bioethicists do—fretted, advocated, encouraged, and debated about the proper treatment of patients. He's shown particular concern for people who receive deep brain stimulation while in a vegetative state and has written numerous papers about the potential benefits of this kind of therapy.

Patients who have been unresponsive for more than a year or so have traditionally been viewed as hopeless cases and are not typically considered candidates for DBS. A 2007 paper by Fins in Nature challenged this dogma when it revealed therapeutic benefits for a patient who received DBS after having been in a minimally conscious state for over six years. The work not only broadens the therapeutic spectrum for this kind of patient but also complicates our standard picture of consciousness, which we tend to view as a discrete state. The mind of a vegetative patient begins to look more like a banked fire scattered with embers and the potential to at least partially reignite.

But then comes the ethical part. With a device as new and successful and complicated as the deep brain stimulator, great care needs to be taken to ensure that the individuals who go under the knife and allow surgeons to wire up their thought boxes are treated more as patients than as research subjects. Fins has used many panel discussions and opinion articles to discuss the ongoing care of DBS patients and to urge surgeons to make a long-term commitment to patients who have already been mined for data.

One of the main problems that Fins talks about, beyond a lack of long-term initiative, is the need to spread basic neurosurgical information beyond specialists. When a patient goes home to a small community after receiving a deep brain stimulator, it is not acceptable for them to have to return to a specialized facility every time they need a minor tune-up or a new battery. Fins likens the situation to the relative ease of maintaining a heart pacemaker, and says we can do better.

Lest people like Fins come across as downers, it's important to point out that his suggestions do more than caution people in the field of DBS. Doing things like developing long-term relationships with DBS patients will ultimately benefit the technology itself, as we gather complete records of its performance. Hopefully that will be part of his agenda as president next year.

Boeing's Plastic Plane Takes Off


Back in 2006, I wrote: "Sometime next year, the first Boeing 787 Dreamliner will rise into the skies above Seattle. The takeoff will probably be like any other, but the plane sure won't."

Well, the "next year" part was a little off. After more than two years of delays, the first flight of the 787 happened yesterday.

Now, as for the plane itself, it's very special indeed. From my 2006 Spectrum article, "Carbon Takeoff":

The midsize, wide-body 787--whose overall design Boeing finalized just a few months ago--is the first commercial jet to have fuselage and wings made almost entirely of advanced, plasticlike materials known as composites. Composites are mixtures of resins and high-strength fibers of carbon, boron, graphite, or glass. They are generally lighter, stronger, and more resistant to fatigue and corrosion than the aluminum alloys widely used in planes today. In the 787, Boeing is using mostly carbon-fiber composites, which in smaller quantities are found in items such as high-end bicycle frames and the fenders of expensive sports cars.

The aerospace industry has sought to use more and more composites instead of metal to create more agile and fuel-efficient aircraft. These new materials have been going into military planes for decades, and in recent commercial aircraft they account for 10 to 25 percent of the total weight; they are used in small fuselage components, tails, and select portions of the wings, such as trailing-edge flaps.

But fully half of Boeing's 250-seat Dreamliner will be composites. The company says that thanks to the new materials, an improved aerodynamic design, and better engines and onboard systems, the 787 will burn 20 percent less fuel than comparable jetliners and have maintenance costs 10 percent lower.

The use of composites is just part of the story. The other big innovation was the way Boeing had the new plane designed. The company recruited collaborators from Japan, Italy, and Australia to help not only fabricate but also design the 787. To design the carbon wings, Boeing partnered with three "heavies" of Japanese industry: Mitsubishi Heavy Industries, Kawasaki Heavy Industries, and Fuji Heavy Industries. Again from my story:

[Boeing wing design team leader Mark Jenks] says that for past planes, Boeing followed a "build to print" model: the partners would come in only at a later stage and would basically fabricate the parts according to Boeing's specifications. But for the 787, the partners were brought in about four and a half years earlier, which gave them enough time to participate in the early design work and provide input based on their manufacturing expertise. The scheme also let Boeing spread the risk, because the company's partners picked up a sizable portion of the development costs in exchange for a bigger share of the profits that will accrue if the plane is successful.

Having more hands at the drawing board had its advantages, but it surely created problems as well. The Seattle Times reported early this month:

Then, starting in 2007, supply-chain problems led to a series of morale-sapping delays costing billions of dollars, and the Dreamliner program shuddered to a standstill.

As the long-delayed first flight finally arrives, any sense of triumph has been dampened by repeated snarls in building the first few jets, compounded this year by a design flaw discovered at the wing-body joint.

Aviation Week had more details on the recent wing problem:

After five earlier delays that put the program about two years behind schedule, Boeing expected the 787 to fly last June. But a last-minute analysis of stresses placed on stringers in its wing as the wing bent to simulated flight loads did not conform to predictions in Boeing's computer analysis. The weakness showed as delamination at the ends of the stringers. They are rib-like strengtheners that run the length of the wing. They are connected to the fuselage at the side-of-body join and it was in that region that they failed to meet specifications. There are 17 for each wing and all were strengthened.

In the end, Boeing's decision to focus on a midsize, fuel-efficient plane -- in contrast to the gigantic A380 of archrival Airbus -- proved a prescient, and profitable, choice. Boeing says it has received 840 orders so far, which makes the 787 the fastest-selling new commercial jetliner ever.

The image we ran in my 2006 story was an artist's rendering; it showed a computer-generated 787 gliding against blue sky and fluffy clouds. Yesterday was a gray rainy day in Everett, Wash., from where the plane took off. But at least now the images are real.


Photos: Boeing

Federal Trade Commission Sues Intel, and Intel Fires Back

And so it continues. As we predicted in a podcast last month, Intel's antitrust worries aren't anywhere near over.

After officially investigating the chipmaking giant for over a year and a half, the U.S. Federal Trade Commission took the plunge and sued. That doesn't come as a total surprise, given last month's similar suit from the New York State attorney general's office, and a fine levied earlier this year by the European Commission, the EU's enforcement arm. Intel also recently paid off rival AMD in return for that company dropping its lawsuit against Intel.

From the FTC press release:

“Intel has engaged in a deliberate campaign to hamstring competitive threats to its monopoly,” said Richard A. Feinstein, Director of the FTC’s Bureau of Competition. “It’s been running roughshod over the principles of fair play and the laws protecting competition on the merits. The Commission’s action today seeks to remedy the damage that Intel has done to competition, innovation, and, ultimately, the American consumer.”

Intel fired back, saying the FTC would like to impose restrictions so harsh that "it would make it impossible for Intel to conduct business." Furthermore, the company claims,

The FTC's case is misguided. It is based largely on claims that the FTC added at the last minute and has not investigated. In addition, it is explicitly not based on existing law but is instead intended to make new rules for regulating business conduct. These new rules would harm consumers by reducing innovation and raising prices.

But the FTC isn't buying. According to its release:

To remedy the anticompetitive damage alleged in the complaint, the FTC is seeking an order which includes provisions that would prevent Intel from using threats, bundled prices, or other offers to encourage exclusive deals, hamper competition, or unfairly manipulate the prices of its CPU or GPU chips. The FTC also may seek an order prohibiting Intel from unreasonably excluding or inhibiting the sale of competitive CPUs or GPUs, and prohibiting Intel from making or distributing products that impair the performance–or apparent performance–of non-Intel CPUs or GPUs.

This could indicate that the Obama administration's antitrust enforcers are indeed coming down harder than those of the previous administration.


Packaging: The Red-Headed Stepchild of the Semiconductor Industry

The time-honored trope of teen movies is the mousy nobody who finally takes off her glasses and lets down her ponytail, and suddenly she's the prom queen. In the semiconductor industry's version of that movie, that girl's name is Packaging.

Packaging was the undercurrent of much of this year's International Electron Devices Meeting. No one could have put it better than Semiconductor Industry Association vice president Pushkar Apte, who stated that "packaging is the red-headed stepchild" of the industry. Until now, anyway. Two major forces are driving attention back to packaging: medical applications and the end of scaling.

As an example of the former, at IEDM, Purdue University researchers showed implantable wireless transponders that can monitor radiation doses received during cancer treatments. The miniature transponders would be implanted near a tumor during radiation therapy. The part is a prototype, as far as I understand, and the Purdue researchers are working with the radiation oncology department at the University of Texas Southwestern Medical Center. There, doctors can give them an idea of what's needed in terms of packaging. But what happens when a part like this transitions from prototype to off-the-shelf? It's going to need innovative packaging. That's what.

The second driver is the ever-impending end of Moore's law. It's no secret that engineers are running out of options with transistor scaling. The industry is nominally at the 32-nm node—which means Intel is about to start shipping microprocessors with 32-nm feature sizes (its soon-to-be-released Westmere processors). No one else is.

But other chipmakers are struggling to keep up with that roadmap. AMD only released its first 45-nm processors this past January. According to EE Times, "a period of more than two years is now expected between the introduction of AMD's 32nm technology and the previous 45nm node first seen in late 2008."

TSMC is also lagging behind Intel but ahead of AMD with 32-nm process technology, which it expects to have ready in 2010. (For more on where everyone stands with 32-nm process technology, read this exhaustively researched EE Times piece.)

Why is it so hard to scale? Researchers agree that the industry has hit a brick wall because scaling transistors to ever-tinier dimensions causes reliability to fall steeply. Researchers who didn't want to go on the record told me as much, and at a short course on Sunday, attendees repeatedly expressed frustration at the difficulties of further scaling.

3D integration looks like a viable alternative for chipmakers who don't want to bang their heads against Moore's law in the quest for 22-nm process technology. 3D integration boils down to this: stack 'em vertically instead of squeezing more and tinier transistors onto a planar surface. It means that with a fixed transistor and die size, you can still add processors and memory. Johns Hopkins University electrical engineering professor Andreas Andreou estimated that by the time the industry arrives at 22-nm process technology, it would be more effective to stack four 22-nm chips than to press on to the 11-nm node. "The gold rush of shrinking will be replaced by 3D," he predicted.

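The arithmetic behind Andreou's comparison is easy to sketch, if you accept the rough rule of thumb that transistor density scales with the inverse square of the feature size (my simplification, not his actual analysis):

```python
# Rule-of-thumb sketch (my simplification, not Andreou's actual analysis):
# transistor density scales roughly as the inverse square of feature size.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

# Shrinking from 22 nm to 11 nm quadruples transistors per unit area...
print(density_gain(22, 11))      # 4.0
# ...and stacking four 22-nm dies gives the same 4x per unit footprint,
# without any of the lithography pain of an 11-nm process.
print(4 * density_gain(22, 22))  # 4.0
```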
Even Nvidia is on the 3D bandwagon: John Chen said in his keynote presentation that graphics processors can’t make progress unless they go 3D. Two IEDM sessions were devoted entirely to advanced 3D technology and processing for memory and logic. In one session chaired by researchers from IBM and Samsung, CEA-LETI researchers threw down the gauntlet: For the first time, they said, 3D CMOS integration can be considered a viable alternative to sub-22nm technology nodes. TSMC researchers positioned 3D integration as healthy competition for the 28-nm node. IMEC, Fujitsu, and ST Micro presented their research into making 3D work.

Researchers are divided on the severity of the issues that plague 3D integration. Heat, alignment, and metal contamination all remain, but according to Hopkins professor Andreou and NEC researcher Yoshihiro Hayashi, heat is a red herring: any number of innovations will easily solve the heat problem by the time 3D-packaged wafers are ready to hit the shelves (among them, using through-silicon vias to carry excess heat to the heat sink, but that's a whole other story).

In any case, the general assumption is that you can work around the Moore's law limitations by doing other things, like 3D integration. At the very least, 3D chip integration might buy the industry a little time so that researchers can get their ducks in a row with promising technologies like extreme ultraviolet lithography, multigate transistors, and 2nd-gen high-k metal gate technology.

But we're not at the prom yet. (We’re still watching the part of the movie where the best friend realizes that our girl Packaging needs a haircut and a full face of makeup.)

You’ll note that most of the problems researchers described are about packaging. Many ingredients in 3D stacks rely on innovations in packaging to make them viable. To solve the heat problem, for example, researchers are assuming that new ways of diverting excess heat to the heat sink will be developed. But who's going to figure that out? Are through-silicon vias part of the chip or part of the packaging? What about those heat sinks?

3D chips require new kinds of packaging. And new kinds of packaging require innovation. And that, at last, is the crux of the problem: innovations in packaging? Whose problem is that?

The semiconductor industry has disintegrated, over the past decades, into many horizontal layers. Consider how the chip in your laptop got there. A designer at a fabless semiconductor company probably designed it and then sent it to TSMC. TSMC manufactured the chip based on those designs and sent it to the packaging company, which packaged the chip and sent it to the systems guy, who put it all together and sent it to its final destination, the end unit provider.

Now companies are finding that they need to re-integrate at the leading edge. Some fabless companies have said that in order to get the packaging they want, they need to invest in packaging startups.

That disintegration/reintegration dynamic raises a question: who across these companies has ownership, with all the rewards and liabilities that word implies? If packaging becomes more important and plays a bigger role in chip design and innovation, it will need to address, particularly for medical applications, issues of heat, reliability, and safety.

The packaging industry as a whole sees about $20 billion in revenue each year. Contrast that with Intel alone, which pulls down $40 to $50 billion a year. Additionally, chipmakers on average pump almost 20 percent of their income back into R&D. Contrast that again with R&D spending by packaging companies. ASE--the biggest packaging behemoth, which brings in about $3.5 billion a year--is the record holder among its cohort for how much it spends on research and development: 3.2 percent. 20 percent of $40 billion is a lot, and that’s probably why Intel is going to be the first to ship 32-nm processors. 3.2 percent of $3.5 billion? Well, it’s not enough for any kind of risky, out-of-the-box innovations. The industry is just going along to get along.

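Put those percentages in absolute dollars and the gap is stark:

```python
# The R&D gap in absolute dollars, using the round figures quoted above.
intel_rd = 0.20 * 40e9    # ~20 percent of ~$40 billion in revenue
ase_rd = 0.032 * 3.5e9    # 3.2 percent of ~$3.5 billion in revenue

print(f"Intel: ${intel_rd / 1e9:.0f} billion a year")  # ~$8 billion
print(f"ASE:   ${ase_rd / 1e6:.0f} million a year")    # ~$112 million
print(f"ratio: {intel_rd / ase_rd:.0f} to 1")          # ~71 to 1
```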
Who can blame them? Why should they absorb the risks that will plague any kind of innovation in packaging? Innovation in packaging also implies liability. Just look at what happened to Apple last year when Nvidia famously screwed up its GeForce GPUs. Apple had to replace the faulty chips for free. The problem was traced to a packaging defect.

Microsoft had to write down its first Xbox 360 chips because of packaging issues that led to the infamous "red rings of death"--to the tune of $1 billion.

And if you’re still not convinced, think about the potential liability in medical implants.

Right now, no one is in a position to be responsible for innovation in packaging, but innovation is sorely needed. Someone needs to step up and give this poor girl a makeover.

MIT Forum on Human Spaceflight Highlights Challenges and Potential Solutions

Updated Wednesday, December 16, 2009: Video of forum posted below.

NASA has a problem. The U.S. space agency is trying to build new rockets and new spaceships, but it’s not clear where they should go (see our Special Report on "Why Mars? Why Now?" for detailed coverage). This summer’s review of U.S. human spaceflight plans, conducted by a ten-person commission headed by former Lockheed Martin chairman and CEO Norman Augustine, came up with several options in a report released in October (failure was not one of them).

In Friday’s forum on the implications of that report, held at MIT, Augustine and fellow committee member Edward Crawley, an MIT professor of engineering and aeronautics and astronautics, shared the floor with space policy historians John Logsdon and Asif Siddiqi to discuss what was in the report, what it means, and the next steps toward a political decision.

Key points emphasized in the panel:

--Mars must be the goal, but it must not be the first destination. It’s too hard, too expensive, and we need to learn to live and work in space first.

--NASA has to get out of the crew and cargo business, and focus instead on what to do beyond low-earth orbit; let the commercial sector take over hauling folks and stuff to orbit.

--The NASA administrator must have authority to turn plans into action, i.e. to be “the CEO of NASA,” instead of having his hands tied by Congress. (Currently, Congress won’t allow the administrator to reduce NASA’s workforce or close any facilities, since they bring revenue and jobs to several states. But that makes it harder to run the agency efficiently.)

--It’s a multiplayer space game now, not just the U.S. and Russia. Future space exploration missions will have to take into account burgeoning space programs in China and India, in addition to already-active programs in Japan, Canada and Europe.

--The president needs to actually make a decision, and then commit to it. Let’s either have a viable human spaceflight program, with enough resources to make it valuable, or let’s have the courage to end it now, Augustine said, rather than letting the program struggle along half-heartedly.

The Augustine report makes clear that the current “Constellation” program—which was put in place to answer the Bush administration’s 2004 challenge to return to the moon and then go on to Mars following the space shuttle’s retirement in 2010, but which was not funded accordingly—is unsustainable at its current funding level.

So the three decisions the president will have to make, Crawley said, are the degree to which the U.S. should embrace the international community; the destination (i.e. the moon, Mars, a variety of near-Earth objects like asteroids, or moons of Mars and Mars fly-arounds); and the budget.

Though the committee members were careful not to endorse any particular options, given that their charter was merely to state choices, not to recommend any, it’s possible to read between the lines. A flexible path is the most logical, and Crawley’s explanation had me convinced. It provides for intermediate steps, multiple new accomplishments, and continued exploration while simultaneously building equipment for future landings on the moon and Mars. It also provides the chance to have many new “firsts” along the way, instead of waiting thirty years for the first person to land on Mars. (Here’s another good analysis of the flexible option by longtime space expert and former NASA engineer James Oberg.)

Unfortunately, the logical path is also the one that will result in benefits ten or so years down the road, Crawley suggested, which means it’s unlikely as a political decision.

Regardless, the decision is coming soon. Recommendations prepared by NASA, the Office of Science and Technology Policy, and the Office of Management and Budget are even now making their way to the president’s desk for a decision, according to John Logsdon. Though President Obama will have to make the choice by the end of December to influence the FY 2011 budget, Logsdon predicts that the announcement won’t come until later, as part of a State of the Union address or in a separate speech. Any such announcement will likely include an invitation to international partners to join in crafting the future of spaceflight, Logsdon said at the MIT forum.

Logsdon also expects that the president himself will make the decision, rather than leaving it to his science advisor, national security council, or budget people. “And the president will want to take credit for it,” Logsdon added.

In the meantime, an agreement reached earlier this week between the House and Senate on a 2010 appropriations bill would ensure that the current Constellation program won’t be terminated without official approval while the agency awaits Obama’s verdict.

The bill would provide U.S. $3.8 billion to continue human spaceflight operations through 2010 in the absence of a decision from the administration. If voted in, the bill would also require any changes in the program’s direction to be approved by later acts of Congress.

The discussion at MIT was thoughtful and intelligent, and it showed that the Augustine committee really did get a lot of analysis done in a very short time. But no one can predict the president’s decision. As Logsdon concluded, “I guess the final word is: stay tuned.”

Watch a video of the forum below (about two hours - moderator David Mindell starts speaking 5 minutes in, panel introduced starting at 12:20, comments from the panel begin at 17:50) or catch it at MIT Aero-Astro news. 

Video: Massachusetts Institute of Technology

American Sign Language Goes Mobile

In March we covered research at Cornell University that's bringing sign language to mobile devices--which the deaf community has traditionally only been able to use for text messaging.

Now the Cornell team, led by electrical and computer engineering professor Sheila Hemami, has developed prototype devices and is testing them with about 25 American Sign Language (ASL) speakers in the Seattle area.

From the press release:

“We completely take cell phones for granted,” said Sheila Hemami... “Deaf people can text, but if texting were so fabulous, cell phones would never develop. There is a reason that we like to use our cell phones. People prefer to talk.” The technology, Hemami continued, is about much more than convenience. It allows deaf people “untethered communication in their native language” – exactly the same connectivity available to hearing people, she said.

The challenge was to make the phones process enough video frames per second for test subjects to hold conversations in real time, despite low bandwidth, and without draining the phones' batteries.

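To get a feel for the constraint, here's a back-of-the-envelope sketch with numbers I'm assuming for illustration (the Cornell team hasn't published these particular figures):

```python
# Illustrative bitrate estimate (assumed numbers, not the Cornell team's):
# a small, low-frame-rate video stream for mobile sign language.
width, height = 320, 240   # assumed QVGA frame
fps = 10                   # a low frame rate saves bandwidth and battery
bits_per_pixel = 12        # assumed YUV 4:2:0 sampling

raw_bps = width * height * fps * bits_per_pixel
print(f"raw: {raw_bps / 1e6:.1f} Mb/s")               # ~9.2 Mb/s
# Even a 100:1 codec ratio is needed to fit a cellular uplink.
print(f"compressed: {raw_bps / 100 / 1e3:.0f} kb/s")  # ~92 kb/s
```

Every bit of that compression costs CPU cycles, which is exactly where the battery drain comes in.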
Now the researchers are working to make the phones "as user friendly as possible," while reducing the cost of integrating mobile ASL software into the devices.

Darpa, the Government Agency With the Best Imagination, Sends Competitors on a Search for Red Balloons

Who doesn’t love Darpa, the Defense Advanced Research Projects Agency? After all, we have Darpa to thank for the Internet.

In recent years, Darpa has stood out as the agency that knows how to capture the imagination of scientists, engineers, and the general public. Take the Grand Challenges that sent autonomous vehicles across the desert. These were hugely difficult tasks that sent engineers all over the country scrambling night and day; the effort was big, as was the prize (US $2 million).

But even when the prize is small (just $40,000), it seems that Darpa knows how to go for the emotional punch, the vivid symbol that captures the imagination.

Because tomorrow, Saturday, 5 December, Darpa staff members will be tying up 10 red balloons at unannounced locations around the U.S. Each balloon will be visible from a public roadway. The challenge—be the first to identify the latitude and longitude of each balloon. You can work alone; you can work in teams. You can use any technical tool you can think of; you can simply cruise around looking for them. The balloons will only be visible on Saturday, but you’ll have up to nine days afterwards to submit your entry.

The contest is officially called the Network Challenge; the organizers assume that you won’t be able to win without the massive use of computer communications tools and social networks. And that’s what Darpa is trying to figure out—just how these things are used for collaboration today. It is also holding the event to commemorate the 40th anniversary of the day the first four nodes of the Arpanet—the predecessor to the Internet—were connected.

It’s a very cool task; sort of Rick Smolan, originator of the Day in the Life series of photography projects, meets Christo, the artist behind huge and colorful public installations.

But I think the real evidence of imaginative genius is the choice of the red balloon as the object to identify. There’s something magical about a balloon, a lonely balloon, tied in the middle of nowhere—French filmmaker Albert Lamorisse best captured that magic in the classic movie Le Ballon Rouge (The Red Balloon), but any child who has spent a day with a balloon tied to his wrist or stroller understands that magic. Kudos to the folks at Darpa for understanding it too. And good luck to the competitors.

By the way, registration is open until the contest begins.

Photo: Darpa

What Does Real-Time Search Mean to Google?

I’m doing research on real-time search to learn what all the fuss is about, so I called up Google. Why not? They’re anxious to get “real-time” search into their results, and I wanted to know what the term means to them. Turns out they’re making a “cool” announcement about it Monday, and won’t talk to me till after that.

Much of the buzz about real-time search points to scouring Twitter feeds and Facebook status updates for the most current information on the web. But that’s pretty much the opposite of how traditional Google search works—which is based on producing results according to authority rankings established over time. So how will this real-time information get roped in with (or distinguished from) more traditional content searches, and how will the relevance of results be decided?

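Nobody outside Google knows how that blending will work, but a toy model shows the shape of the problem. Here's one hypothetical scoring function (entirely my invention, not Google's algorithm) that trades long-term authority against freshness:

```python
import math
import time

# Toy ranking sketch (entirely hypothetical, not Google's algorithm):
# blend a document's long-term authority with an exponentially
# decaying freshness bonus for just-published items.
def blended_score(authority, published_ts, now, half_life_s=3600.0, w=0.5):
    age = max(0.0, now - published_ts)
    freshness = math.exp(-age * math.log(2) / half_life_s)  # halves hourly
    return (1 - w) * authority + w * freshness

now = time.time()
# A minute-old tweet from a nobody vs. a week-old authoritative page:
print(blended_score(0.1, now - 60, now))         # ~0.54: freshness wins
print(blended_score(0.9, now - 7 * 86400, now))  # ~0.45: authority alone
```

Every knob in there (the half-life, the weight) changes which result wins, and that is precisely the tension between what's hot now and what's the most relevant answer to a query.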
Forgetting the mechanics for a minute, let’s get to semantics. What does “real-time” even mean? Does it mean finding info that’s just been published, because it just happened? Or rather finding websites that have just been updated, but maybe the events they describe happened awhile ago? Is it just for Tweets and the like, or will it be more comprehensive?

Danny Sullivan’s Search Engine Land has a lengthy post from a few months ago (definitely not real time, but still worth reading) that explores this topic and also compares search results from several smaller fish like Collecta and Scoopler. Sullivan argues that the term “real-time” should refer to information that’s posted immediately, as it happens—basically, tweets and status updates, or what he calls “microblogging” (the comments section of the post provides nice counterpoint arguments). As for news and even blogs, Sullivan suggests, that content’s already history by the time it goes live on the web.

To get itself up to speed, Google announced in October that it’s including Twitter updates in its search results. And yesterday, TechCrunch posted that Google has integrated Twitter functionality into its Friend Connect application (its equivalent of Facebook Connect), which lets people log in to various websites using their Google account information. Now, any site that has Friend Connect enabled will allow you to log in with your Twitter account, too, which will automatically link your Twitter profile and let you tweet right from the site. Looks like Google and Twitter are getting pretty cozy.

So what’s next? Will Google be getting access to Twitter’s Firehose Feed, which would allow the company to search all Tweets as they happen and index them? I was assured that the upcoming announcement would answer all my questions, so maybe we’ll find out more about how Google plans to walk the line between what’s hot now, and what’s the most relevant answer to my current query.

I hope it also gives us a hint of Google’s planned audience for real-time search. Who is this kind of search useful for, and what’s the value added? Is it just for people looking to learn what’s going on, faster than they can read a newspaper, log in to a website, or even scan a blog post or RSS feed? What about someone doing research on the history of a vacation destination? Will real-time search trump regular old Google search?

We’ll try to post as fast as possible after Monday’s big announcement.


Tunnels Under Gaza

Photo credit: Ibraheem Abu Mustafa / Reuters

In this month's issue of IEEE Spectrum, I describe Gaza's jury-rigged power grid, and the challenges involved in providing electricity in a territory essentially cut off from the world. But ultimately, when the grid proves unreliable, people take matters into their own hands.

Just as war and the economic blockade have forced those managing the electricity grid to cope with the seemingly impossible task of providing steady power, ordinary Gazans have had to learn to improvise when possible. When electricity faltered during the war, residents exchanged tips on creative workarounds. A favorite is harvesting the low-level current that runs through the telephone lines to recharge cellphones.

But residents also rely on consumer goods—candles, batteries, and generators—smuggled through the tunnels that connect southern Gaza to Egypt. Driving into a bombed-out section of Rafah, near the Egyptian border, I got a chance to see what, in the absence of a properly working grid, has by default become the option of last resort.

Because smuggling relies in large part on the other side turning a blind eye, smuggling anything through Israel would be impossible, and the consumer-goods tunnels all connect to Egypt. A large number of the tunnels originated in houses (tunnels are typically owned by families) that are now smashed to rubble, but many tunnels are already open again, in some cases covered only by an impromptu shed to shield them from Israeli aircraft and drones that might spot them from above. We stopped to go down into the shafts of two tunnels—one for fuel and one for cement. (Tunnels are often designated for specific cargo. Bigger tunnels, for instance, are reserved for cement and other bulk goods and are reinforced with wood; fuel tunnels can in some cases be smaller.) The tunnel owner asked that we photograph quickly; if Hamas security caught us, they would demand money.

During the 2008–2009 war, Israel bombed the tunnels, collapsing many of them, but a large number of the underground passageways remain, and new ones are constantly being dug. Smuggling operations quickly resumed.

From the Palestinian perspective, the tunnels act as a slow-release pressure valve, allowing just enough goods into the territory to prevent a catastrophe, but not enough to allow the economy to function normally. While Israel points to the illegal tunnels as another sign of Gazan—and particularly Hamas—dysfunction, Hamas naturally blames Israel. “If [the borders] are open, there are no tunnels,” Fawzy Barhoom, Hamas’s spokesman, says in an interview in his Gaza office.

In the meantime, the tunnels don’t help the power plant, says Rafiq Maliha, a plant manager, with a wry laugh. The idea of a sophisticated power plant using contraband fuel or parts is just unrealistic. “We are talking about a power plant,” he said. “We can’t smuggle [parts] through the tunnels.”
