Tech Talk

Sensor Nation, UK Style

At the time of the July 2005 London bombings, the average Londoner was caught on camera some 300 times a day. That didn't prevent the attacks, of course. Surveillance cameras — the UK had an estimated 4.2 million of them in 2005 — can help the police solve a crime after it's committed, but no one is looking at them in real time.

An organization named Internet Eyes aims to change that. It has developed a scheme that would allow ordinary citizens to watch CCTV (closed-circuit television) video streams and sound an alarm when they see things that are, well, alarming. Internet Eyes holds out a reward of £1000.

According to the BBC report that broke the story, today London has "one camera per 14 people" and yet the police have "estimated that in 2008 just one crime was solved per thousand CCTV cameras in the capital. The deficit was partly blamed on officers not being able to make the best use of the many thousands of hours of video generated by CCTV."

It's hard to imagine the Internet Eyes program, which is slated to begin next month in Stratford-upon-Avon, will do much to improve that. While news reports make it sound as if all you have to do is see a crime and report it to collect your £1000, the reality is a lot more complicated and a lot less lucrative.

Viewers are anonymously monitoring random video feeds streamed from privately owned establishments. At no time can Viewers designate or control the video feeds they receive and the locations of the feeds are not disclosed.

The instant a Viewer monitors an event, an alert can be sent directly to the owner of that live camera feed. The alert is sent along with a screen grab, identifying the image you have observed. Only the first alert received by the camera owner is accepted.

The camera owner will then feedback (rate) the result of the alert. Their feedback is converted into points and entered into a Viewers monthly league table. At the end of each month the highest scoring Viewer will receive the reward money; this could be split in the event of a tie.

Viewers register for free with no recurring fees. Each Viewer has 3 x alerts per month allocated to their account for free. Viewers are able to ‘top up’ their alerts through PayPal if they so desire. The free allocations of alerts are limited to prevent system abuse.

So, for one thing, you don't get to choose your feed. Even though you'd probably be a more attentive viewer watching your own neighborhood or a favorite store, you'll probably never get that chance. Then, your success depends on the utility rating the camera owner gives to the alert you send — that, plus an opaque points program. Finally, only a single thousand-pound prize is awarded, no matter how many crimes are "solved" by CCTV alarms that month. (Oh, and having to pay for your own alarms after the first three — nice touch. Still, people pay for their votes on American Idol, without any prospect of an award, so maybe they'll pay to play here as well.)
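To see just how long those odds are, here's a toy model of the scheme as I read it. It's a sketch, not Internet Eyes' actual software: the 0-5 rating scale and the conversion of feedback into points are my assumptions; only the three free alerts per month, the single monthly prize, and the tie-splitting rule come from the company's description above.

```python
# Hypothetical model of the Internet Eyes alert/league-table scheme.
FREE_ALERTS_PER_MONTH = 3
PRIZE_POUNDS = 1000

class Viewer:
    def __init__(self, name):
        self.name = name
        self.alerts_left = FREE_ALERTS_PER_MONTH
        self.points = 0

    def send_alert(self, owner_rating):
        """owner_rating: how useful the camera owner judged the alert (0-5, assumed scale)."""
        if self.alerts_left == 0:
            return False  # out of free alerts; would have to top up via PayPal
        self.alerts_left -= 1
        self.points += owner_rating
        return True

def monthly_payout(viewers):
    """Only the top of the league table is paid, no matter how many useful alerts were sent."""
    best = max(v.points for v in viewers)
    winners = [v.name for v in viewers if v.points == best]
    return winners, PRIZE_POUNDS / len(winners)  # the prize is split on a tie

alice, bob = Viewer("alice"), Viewer("bob")
alice.send_alert(5); alice.send_alert(4)   # two well-rated alerts
bob.send_alert(5)                          # one well-rated alert
print(monthly_payout([alice, bob]))        # (['alice'], 1000.0)
```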

"Solved" is in quotes because it's not clear what it means for a crime to be solved by a CCTV camera. Back in August, privacy expert and security entrepreneur Bruce Schneier noted,

To me, the crime has to have been unsolvable without the cameras. Repeatedly I see pro-camera lobbyists pointing to the surveillance-camera images that identified the 7/7 London Transport bombers, but it is obvious that they would have been identified even without the cameras.

Then there's the question of how well people will assess that a crime is being committed by seeing a few seconds of grainy video stream by their eyes. Football referees often can't determine that a facemask was grabbed, or that a foot stepped out of bounds, when they watch higher quality video streams than those of the average street camera.

The BBC report noted that crimes aren't noticed in real time today because "viewing hours of mostly tedious and often poor quality images is a lengthy and unpopular job." If that's true when people who are doing it receive a regular paycheck, does it get any less tedious when done in the hope of a windfall hardly more likely than winning the lottery? (Still, people pay good money to engage in the same repetitive motions that factory workers have complained about for decades, in the hope of hitting a slot-machine jackpot.)

If the program is unsuccessful, few people will be watching. But if the program is successful, will the camera owners, and ultimately the police, have enough staff on hand to evaluate an alarm quickly enough to apprehend a suspect at the scene of the crime? It seems unlikely. The bottom line is that probably some people will watch the video streams, the police will be alerted to a couple of crimes, and Londoners' privacy will continue to erode.

Exactly one year before the London bombings, Spectrum published a special report entitled "Sensor Nation." In an article entitled "We Like to Watch," my colleague Harry Goldstein presciently wrote,

For entertainment, we gather in front of the tube for mass-mediated group therapy sessions called reality shows. Hundreds of millions of us around the globe tune in to watch people who eagerly endure excruciating plastic surgery; stab each other in the back for a chance to work for Donald Trump; or wolf down sea worms, cockroaches, and worse to survive on a desert island. For Generation Y, "Big Brother" is a reality television show, where, for a chance at winning half a million dollars, contestants volunteer to be cooped up in a house with total strangers and have their most private moments broadcast to a hungry audience.

It's not hard to imagine a near future of reciprocal transparency when all of us are watched and can watch right back. We're halfway there.

History's First Draft

The eyes of the world have moved to U.S. President Obama's surprising Nobel Peace Prize, but the controversies over the physics prize just keep coming. And they're not limited to the CCD. As my colleague Sue Karlin pointed out to me in an e-mail, an Indian news site has raised questions about fiber optics.

We at Spectrum received our fair share of letters and comments about the half of the prize that went for the charge-coupled device. Our initial article "CCD Camera Chip Pioneers Share Nobel" by contributor Neil Savage led to a fruitful conversation between news editor Sam Moore and Michael F. Tompsett, an IEEE Fellow and onetime colleague of newly minted Nobelists Boyle and Smith ("Nobel Controversy: Former Bell Labs Employee Says He Invented the CCD Imager"), as well as a further discussion between Savage and Carlo Sequin, yet another Bell Labs CCD researcher ("Nobel Controversy: Who Deserves Credit for Inventing the CCD?").

Yesterday, the site published an exhaustive compendium of Nobel Prizes that arguably should have gone to Indian physicists, including:

• Jagadish Chandra Bose (wireless signaling before Marconi as well as anticipating the 'n' and 'p' type semiconductors);
• Satyendranath Bose (Bose-Einstein statistics);
• G N Ramachandran (bio-molecular structures, especially the triple helical structure of collagen); and
• E C George Sudarshan (quantum optics).

Which brings us to this week and "How India missed another Nobel Prize":

What the Academy omitted to note was that Moga, Punjab-born Narinder Singh Kapany, widely considered the Father of Fibre Optics, and, in this capacity, featured in a 1999 Fortune magazine article on the 'Unsung Heroes of the 20th Century', had far the stronger claim.
Charles Kao in a 1966 paper put forward the idea of using glass fibres for communication using light; he tirelessly evangelised it and fully deserves a share of the Prize. However, the fact remains that it was Kapany who first demonstrated successfully that light can be transmitted through bent glass fibres during his doctoral work at the Imperial College of Science in London in the early fifties, and published the findings in a paper in Nature in 1954.

The article is written by Shivanand Kanavi, who, according to his blog bio, is a theoretical physicist cum academic cum economic consultant cum business journalist. He's the author of a book, Sand to Silicon: The Amazing Story of Digital Technology, which he quotes from in making the case that Kapany deserves half of the half-prize that went for fiber optics.

Narinder Singh Kapany recounted to the author, "When I was a high school student at Dehradun in the beautiful foothills of the Himalayas, it occurred to me that light need not travel in a straight line, that it could be bent. I carried the idea to college. Actually it was not an idea but the statement of a problem. When I worked in the ordnance factory in Dehradun after my graduation, I tried using right-angled prisms to bend light.
"However, when I went to London to study at the Imperial College and started working on my thesis, my advisor, Dr Hopkins, suggested that I try glass cylinders instead of prisms. So I thought of a bundle of thin glass fibres, which could be bent easily. Initially my primary interest was to use them in medical instruments for looking inside the human body. The broad potential of optic fibres did not dawn on me till 1955. It was then that I coined the term fibre optics."

Giving credit where it is due is hard. Kanavi does a good job of tracing the idea of bending light all the way back to the 1840s. In Savage's follow-up article, Carlo Sequin teases out the contributions of six different Bell Labs researchers.

If journalism is, as the newspaper publisher Philip Graham once said, the "first rough draft of history," then the Nobel committee is in an uncomfortable in-between position: trying to do a journalist's job with the full weight of historical accuracy on its shoulders. And so the current controversies have a silver lining: we get to read about the many other brilliant researchers who contributed to these marvelous ideas and life-improving technologies.


Nobel Controversy: Former Bell Labs Employee Says He Invented the CCD Imager

Who patented the first digital imager? Michael F. Tompsett says he did. (Although the signal is only digital after it’s gone through the video analog-to-digital converter chip that he also invented.)

Did Willard Boyle and George Smith invent the charge-coupled device? “Their name is on the patent,” says Michael F. Tompsett, an IEEE Fellow and former Bell Labs colleague of the two new Nobel Prize winners, “but all patents are a product of their time, and others may have had an input.”

But did they invent “an imaging semiconductor circuit” as the Nobel citation goes? No, he says. “That was me.”

The CCD that Boyle and Smith invented was not for imaging; it was intended as a memory circuit. According to both Tompsett and the United States Patent Office, it was Tompsett who invented the imager that first demonstrated the electronic photography and video in use today. Tompsett is the sole inventor listed on United States Patent Number 4,085,456, “Charge transfer imaging devices.” The patent covers two subtly different types of imagers, one of which is the CCD imager.

“All the imaging and reduction to practice was me,” says the physicist who in the 1970s ran Bell Labs’ CCD group, which developed TV resolution imagers.

Tompsett had been an imaging guy even before he arrived at Bell Labs. In England he invented an infrared camera tube, which was subsequently developed and used by the U.S. and British militaries, fire brigades, and search-and-rescue teams, and which won a Queen’s Award in Britain, he says. He also invented another silicon imager that “never saw the light of day” because it was quickly eclipsed by the CCD, and he came up with a solid-state thermal imager that has now been commercialized for night vision.

But imaging isn’t his only important contribution. He also developed a technology that was key to growing the gallium arsenide layers of early LEDs and is still in use today. He invented the first solid-state MOS modem and a video analog-to-digital converter chip that is now manufactured by the millions. He now works on healthcare software as founder of Theramanager, in New Providence, N.J.

“I don’t have to hang my reputation [on the CCD imager]” he says, but “it would be nice to at least share the credit.”

You’d expect this to be a galling time for him. Even the picture he’s confronted with in newspapers and on this website is an affront: a staged photo of Boyle and Smith manipulating a camera in 1974. Neither Nobelist was involved with Bell Labs’ imaging-chip work at the time, and Tompsett himself built the camera they are supposedly working with. He was keen to acknowledge the contributions of Ed Zimany and the rest of his group, particularly Carlo Sequin, who joined Bell Labs nine months after the invention and helped refine the imaging chips. Together, Tompsett and Sequin also wrote the first book on CCDs.

But in a conversation on the morning of 8 October, he seemed more concerned with a technical inaccuracy in an IEEE Spectrum article than with his place in history. With regard to getting a Nobel Prize, he says: “I hadn’t seriously thought about myself.”

“You’re not going to change [who wins] the Nobel,” he says. However, he does believe the citation should be corrected.

Image from Tompsett's CCD patent.

Post modified and updated on October 9, 2009

Nobel Controversy: Who Deserves Credit for Inventing the CCD?

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.

So who deserves the accolades for inventing the charge-coupled device? "It depends on what you're celebrating," says Carlo Sequin, who joined the team at Bell Labs developing the CCD a few months after the project began.

"My initial assumption was the Nobel in physics goes to fundamental concepts," says Sequin, now a professor of electrical engineering and computer science at the University of California, Berkeley. "If the fundamental concept was the charge transfer principle, then that goes to [Willard] Boyle and [George] Smith, and maybe Gene Gordon."

But while Boyle and Smith, who were initially trying to design something analogous to magnetic bubble memory for computers in silicon, sketched out the charge transfer concept, they were not the ones who actually built the CCD, Sequin says.

"If we try to find out who made the first practical image sensor, credit would go to Mike Tompsett, possibly [Gilbert] Amelio," he says.

The concept for the CCD was that one could build a potential well in silicon by creating a capacitor out of silicon, silicon oxide, and a metal electrode. Light striking the silicon would be absorbed and would create an electron, which would travel to the well. By applying alternating voltages, you could then move the accumulated charges from one well to the next until they reached the edge of the chip, where the amount of the charge would reveal the intensity of the light striking each well.

In the first design, as Sequin describes it, rows of pixels were lined up in columns, and the charges would move down the columns, from one row to the next, until they were read out in a register at the bottom. This design meant the device would take an exposure, close a shutter, shift the charges down, row by row, and then open the shutter for the next exposure. That meant that every other frame would be spent moving the charge.

Tompsett solved the issue with the frame transfer principle. "He came up with the idea, why not double the size of that image sensor," Sequin says. Only half of the sensor would be exposed, and then the next frame of exposure would be taken while the charge was being moved. "As far as I know, he has the patent on it and he is the sole author of it, and it was his idea," Sequin says.
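Sequin's description maps neatly onto a toy simulation. The sketch below is purely illustrative, not Bell Labs code: the array size and function names are made up, but it shows the two ideas at work, shifting charge row by row into a readout register, and Tompsett's frame-transfer trick of dumping the whole image into a masked storage half so the next exposure can begin while readout proceeds.

```python
# Toy model of CCD readout and frame transfer. Real devices shift analog
# charge packets under clocked electrodes; here we just move numbers in arrays.
import numpy as np

ROWS, COLS = 4, 4  # image area, in pixels

def expose(scene, exposure_time=1.0):
    """Photons absorbed in the silicon accumulate charge in each potential well."""
    return scene * exposure_time

def frame_transfer(image_area):
    """Shift the whole image into a masked storage area (the 'doubled' half
    of the sensor), freeing the image area for the next exposure."""
    storage_area = image_area.copy()
    image_area[:] = 0
    return storage_area

def read_out(storage_area):
    """Shift rows one at a time into a serial register at the bottom edge,
    where the charge in each packet reveals the light that struck each well."""
    rows_read = []
    for _ in range(ROWS):
        serial_register = storage_area[-1, :].copy()  # bottom row reaches the register
        storage_area[1:, :] = storage_area[:-1, :]    # every other row shifts down one
        storage_area[0, :] = 0
        rows_read.append(serial_register)
    return np.array(rows_read[::-1])  # reassemble in original top-to-bottom order

scene = np.random.rand(ROWS, COLS)   # incoming light intensities
wells = expose(scene)                # exposure 1 fills the wells
storage = frame_transfer(wells)      # image slides under the mask in one quick shift
# exposure 2 can now accumulate in `wells` while `storage` is read out
print(np.allclose(read_out(storage), scene))  # True: the readout recovers the image
```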

He says others, including himself, contributed to the development of the CCD. Amelio made the first 8 x 8 pixel chip. Sequin was heavily involved in turning that into a 128 x 128 pixel array. Walter Bertram came up with the idea of building the chip with three layers of polysilicon replacing metal as the electrodes. Since they're transparent, they could be deposited on top of the chip without blocking the incoming light. Putting them in different layers prevented them from shorting out in a way that would destroy the whole device; at most you'd lose a pixel or a column.

Sequin says two technical associates, Edward Zimany and William McNamara, were also heavily involved in developing a practical CCD. "It was six engineers who took a very fundamental principle and really made a practical thing of it." He likens Boyle and Smith to sperm donors, providing the seed of the idea, while the others were midwives and mothers who nurtured it into reality.

But the credit for the ubiquity of CCD-based cameras may belong overseas, he says. Bell Labs, which had been focused on creating a picture phone, eventually dropped the idea as impractical and stopped developing CCDs, Sequin says. Fairchild Semiconductor took up the challenge for a while, but also let it go. "Everything lay kind of dormant for another 10 years, until the Japanese picked it up," Sequin says. It took Japanese researchers at Sony and other places another five years to perfect the device. "The Americans dropped the ball," he says.

Energy-Efficient Appliances With Minds Of Their Own

Ten residences now under construction in Masdar City, Abu Dhabi, will be getting home appliances—refrigerators, cooktops, and clothes washers/dryers—with minds of their own. These appliances will be talking to the electric grid, and the grid will talk back. Based on what they’re hearing, the appliances will adjust their behavior, with the goal of minimizing their demands at peak energy times and therefore saving their owners money. It’s an experiment designed cooperatively by Masdar, an Abu Dhabi-based renewable-energy company, and GE Consumer & Industrial. Good for the environment, good for the pocketbook, all good, right?

Uh, maybe. Unless I imagine myself as the homeowner in one of these supersmart homes, trying to inject myself into the dialog.

Grid: “Energy prices are up.”

Refrigerator: “OK, raising internal temperature.”

Me, low on ice and expecting guests for dinner: “Uh, does this mean I have to drive to the store and buy ice?”

Grid: “Energy prices will drop at 11 p.m.”

Clothes dryer: “Drying cycle paused until 11:05.”

Child: “Mom, what do you mean my soccer uniform is wet? I need it now!”

GE does envision an override; that is, the appliances will respond automatically, but if you happen to notice that the water you put on for pasta has stopped boiling, you can hit an override button. It’s highly unlikely, however, that I’d notice things weren’t going as expected until the guests arrived/the soccer game was about to start/I had called the family to dinner.

So I’d like to add one tiny feature to this new technology: ask permission first. Text me, tweet me, or send me a message on Facebook. Amid all the other communicating these appliances will be doing, it shouldn’t be too hard to make sure they simply say please.
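For what it's worth, the "say please" step is easy to sketch in code. This is a hypothetical illustration, not GE's or Masdar's actual system: the price threshold, message wording, and the notify and ask_permission hooks are all invented.

```python
# Hypothetical demand-response handler that asks the owner before pausing.
from dataclasses import dataclass

@dataclass
class PriceSignal:
    price_per_kwh: float      # what the grid is reporting right now
    threshold: float = 0.30   # above this, the appliance wants to cut back (assumed value)

class SmartDryer:
    def __init__(self, notify, ask_permission):
        self.notify = notify                  # e.g., send a text, tweet, or Facebook message
        self.ask_permission = ask_permission  # returns True only if the owner approves
        self.paused = False

    def on_price_signal(self, signal):
        if signal.price_per_kwh <= signal.threshold:
            if self.paused:
                self.paused = False
                self.notify("Prices dropped; resuming the drying cycle.")
            return
        # Instead of silently pausing, say please first.
        if self.ask_permission("Energy prices are up. OK to pause the dryer until 11 p.m.?"):
            self.paused = True
            self.notify("Drying cycle paused. Hit override to resume immediately.")
        else:
            self.notify("Understood; finishing the cycle at today's higher rate.")

# Example wiring: plain console I/O stands in for real messaging.
dryer = SmartDryer(notify=print,
                   ask_permission=lambda msg: input(msg + " [y/N] ").strip().lower() == "y")
dryer.on_price_signal(PriceSignal(price_per_kwh=0.42))
```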

Long Live Bell Labs

It didn't take long for Wikipedia's Bell Labs entry to be updated to include a seventh Nobel Prize for the storied organization.

1937 Clinton J. Davisson (the wave nature of matter)
1956 John Bardeen, Walter H. Brattain, and William Shockley (transistors)
1977 Philip W. Anderson (electronic structure of glass)
1978 Arno A. Penzias and Robert W. Wilson (cosmic microwave background radiation)
1997 Steven Chu (cooling and trapping atoms)
1998 Horst Stormer, Robert Laughlin, and Daniel Tsui (fractional quantum Hall effect)
2009 Willard S. Boyle, George E. Smith (CCD)

The first six are included in a memorable and entertaining video that the PR department at Lucent Technologies produced during the dot-com bubble, when Lucent and the rest of the telecommunications world were riding high. (It didn't take long for Lucent to be brought low by the corresponding bust, and right around the time the next bubble was at its height, it was bought by Alcatel.)

The digital camera, among many other innovations, couldn't have happened without the CCD (though a lot of cameras use CMOS sensors these days). The seminal, if not the first, digital camera was based around Kodak's 1986 KAF-1300 image sensor, which made Spectrum's list of "25 Microchips That Shook the World."

Kodak's DCS 100 cost about $13 000 (in 1986 dollars), but digital cameras are so cheap and ubiquitous these days that this month's Spectrum do-it-yourself project, a Google Street Maps-like camera array, can use eight of them (and set you back a mere $200 for all of them).

As the 2009 prize shows, Bell Labs' innovations have long outlived the original AT&T and Lucent, an idea that was driven home by technology historian Michael Riordan in a July 2005 Spectrum feature, "The End of AT&T: Ma Bell may be gone, but its innovations are everywhere."

Spectrum's coverage of Bell Labs is too sprawling to offer in its totality, but the above list of Bell Labs' Nobel Prizes wouldn't be complete without the one that got away, memorably described in another Riordan feature, "How Bell Labs Missed the Microchip."

A trip down this particular memory lane also wouldn't be complete without at least one reflection from one-time Bell Labs executive and all-time Spectrum columnist, Bob Lucky: "When Giants Walked the Earth."

Congratulations, Willard Boyle and George Smith. And thank you, Bell Labs. May we somehow, somewhere, someday see another such engine of innovation.


Back In The Clear: Airport Security Service To Resume Operations

Today the New York Times reports that Clear, a company that advertised itself as letting you sail through the security line, is coming back. The company, which died of unknown causes back in June, could resume operations by the “holidays.”

Danny Sullivan was a believer:

Clear, or Fly Clear as it was sometimes known, allowed people to bypass regular security at some airports for an annual fee. I’ve been a regular user since it started. In fact, I was probably one of the program’s most successful affiliates. I’d written about it from an early point, and so many people used my code to get an extra month (and giving me one in the process) that my card was good through 2064.

Clear evolved from the Transportation Security Administration's Registered Traveler program, which lets companies establish exclusive security lines at airports. Verified Identity Pass, Inc., Clear's parent company, gobbled up the largest market share, with 18 of the 21 airports with reserved security lines. Lockheed Martin is the systems integrator, which I assume means they transitioned the technology out of DHS? Gave Clear lots of money? Who knows what's what in the Homeland Security sausage factory. But I digress.

I was getting all excited about maybe shelling out the $199 for a Clear card, but the more I read, the less I understand.

First off, the Clear card merely lets you cut in front of others in line. It does not let you skip the shoes-and-baggie part. According to one experienced Clear user in the TechCrunch comments section, Clear doesn't mean you can board with more than the prescribed 3-1-1:

Just so folks understand – you didn’t get to skip the security “experience”. You simply had priority access to the checkpoint itself. Still had to remove your shoes, take off your coat, your 3-1-1 plastic bag, laptop out, etc.

So are you "clear" or aren’t you? Now that they have your fingerprints and eye-scans in a database, you get to skip the line, but you’re not so verified that they’d have confidence in you not to smuggle a bomb into your laptop or face cream? And why do you need to show your card if you're also swiping a fingerprint at the kiosk and showing a government-issued ID? For all that, you don't even get to bring an extra ounce of shampoo.

OK, so even if the only benefit is skipping the 50-minute line at Dulles (shudder), I understand the $199 fee. I’d be willing to shell out some cash and my digital identifiers if I could avoid having to remove my shoes, take out my laptop, take out A/V equipment, take off my coat, place my pathetic baggie into a separate bin (god I’m exhausted just thinking about it). But why do we need fingerprints and iris scans on top of the $199 to slip through security slightly faster than a 1K Platinum Global Elite Demiurge Ubermensch?

The more I think about it, the more paranoid it makes me. I realize that there’s only so much privacy you can realistically expect from now on. Even if I don’t cough up my finger- and eye-prints, my online medical records will still be stolen, and if they’re not online, some enterprising hobo will still find them in the trash can out back of the doctor’s office. And for all you off-the-grid militiamen out there, canceling your doctor’s appointments won’t help either: In 2006, USA Today reported that 87 percent of Americans could be identified by records listing solely their birthdate, gender, and ZIP code. The only thing that makes me feel better is that there is still safety in numbers.
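That 87 percent figure sounds implausible until you run the numbers. Here's a back-of-envelope sketch; the population, ZIP-code count, and lifespan are ballpark values I'm assuming, not figures from the study.

```python
# Rough check on why (birthdate, gender, ZIP) so often identifies one person.
population = 300e6        # approximate U.S. population (assumed)
zip_codes  = 42_000       # roughly how many ZIP codes exist (assumed)
birthdates = 365 * 80     # one per day over an ~80-year lifespan (assumed)
genders    = 2

combinations = zip_codes * birthdates * genders
print(f"{combinations:.2e} possible (ZIP, birthdate, gender) combinations")
print(f"{population / combinations:.2f} people per combination, on average")
# With roughly 2.5 billion combinations for 300 million people, most combinations
# hold at most one person, which is why that triple is so identifying.
```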

So now for my questions to you, dear reader.

1) Do any Spectrum Online readers have Clear memberships? Is it worth it to you? How long did the fingerprint- and eye-scan process take? Was it convenient, or did you have to schlep to an anonymous office park in McLean, Virginia?

2) Is my contention reasonable: that in an age where everyone is online and catastrophically oversharing, the information pool is so huge and overwhelming that the probability of MY information being stolen is statistically insignificant?

3) Among airport security-bypass systems (like the UK’s Iris Recognition Immigration System (IRIS), Clear, and FLO), do you prefer a card, or an IRIS-like system where you can forget the card and the picture ID and instead go into a small room, look into a mirror, and have your eye scanned as the only ID you need? What’s the word on laser scarring on the inside of your retina?

The Robot Insect Race

Last week, Danger Room reported that DARPA's cyber-insect race has yielded a tangible result: a live beetle that can be steered remotely.

And there's video:

Sharon Weinberger at Danger Room reports:

Berkeley scientists appear to have demonstrated an impressive degree of control over their insect’s flight; they report being able to use an implant for neural stimulation of the beetle’s brain to start, stop, and control the insect in flight. They could even command turns by stimulating the basalar muscles.

Spectrum reported on the topic back in February, when DARPA-funded researchers presented a cyborg moth outfitted with a brand-new radio at the normally bone-dry International Solid-State Circuits Conference.

DARPA is funding all kinds of work that will result in cybernetic insects, or hybrids of biological and electronic bugs, whose implanted electronics make them respond to remote control. The defense community is interested in this new toy because it wants the animal-machine hybrids to transmit data from mounted sensors, like video and audio surveillance or gas sensors.

The main part of the research is growing living pupae around MEMS electronics. That way, once the moth or beetle grows up, it can be remote controlled by sending impulses to the implanted electronics. You can see in this video that applying voltages causes the insect to feel the need to turn left, turn right, stop and start flying.

To achieve that goal, HI-MEMS research is split along three separate tracks: growing MEMS-insect hybrids, developing steering electronics for the insects, and finding ways to harvest energy from them to power the cybernetics. Moths and beetles have been grown with implanted electronics, and both have been shown in flight, but none has yet been demonstrated with a long-lived energy source.

Last year, the New Scientist got some video of the cyborg moth, a precursor to the cyborg beetle (still attached to a tether, as opposed to the free-range cyber-beetle). As Sandrine Ceurstemont reports, "combining living and machine components could eventually make robotic systems more effective."

How nice for them. I think everyone needs to take a deep breath and read this.

Why the Evidence of Water on the Moon is Bad News

The big science story this week was the confirmation that the moon is covered in water. And not just in the shadowed, polar craters where scientists suspected it, but all over. Water on the moon--that has to be great news, right?

Not really. While unexpected discoveries are always interesting scientifically, this one is actually bad news for space exploration. It would have been better if lunar water wasn't quite so ubiquitous.

When I previewed NASA's LCROSS mission last March, I noted that its purpose was not really to determine whether the moon had water but to determine how the water was distributed. Even though LCROSS is still scheduled to smash into the moon's surface in a couple of weeks, it now seems likely that it will only confirm this week's findings: that there are trace amounts of water everywhere on the moon.

For human exploration, highly concentrated deposits of ice would be a much more useful distribution. The possibility of mining water on the moon has often been cited as both the means and the end for sending people back to the moon (Bill Stone, for example, has been a big proponent). Even with the possibility of big deposits, though, I was always skeptical that mining lunar water ice could ever be efficient enough to be worthwhile. Now, it seems unlikely that any usable amount of water can be extracted from the surface. Geologist Robert Clark estimated that one ton of lunar soil might yield "as much as 32 ounces of water" (a little less than a liter). That means it will take a lot of work to get a little liquid, even with the help of innovative suction robots.
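To put Clark's estimate in perspective, here's some rough arithmetic. The two-liter daily drinking ration is my assumption for illustration, not a NASA figure.

```python
# How much regolith would you have to process at the optimistic 32 oz/ton yield?
OUNCE_LITERS = 0.0296                 # one U.S. fluid ounce in liters
water_per_ton = 32 * OUNCE_LITERS     # ~0.95 L of water per ton of soil, at best
daily_need_per_astronaut = 2.0        # liters of drinking water per day (assumed)

tons_per_astronaut_day = daily_need_per_astronaut / water_per_ton
print(f"{water_per_ton:.2f} L of water per ton of soil")
print(f"~{tons_per_astronaut_day:.1f} tons of soil per astronaut per day, just for drinking water")
```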

I'm now interested to see how this finding is used as political spin. Back in the Bush years, NASA's Constellation program promised a fully functioning moon base within two decades (the program continues to move forward, but its future remains in limbo). Constellation's ultimate destination, at least in theory, was Mars. But, in practice, the moon quickly became the primary target. The agency then needed to come up with a bunch of justifications for going there, after the fact. The prospect of mining lunar water always seemed like one of these. Regardless of the impracticality of collecting it, the water finding may energize moon proponents. One article declared that "Water Makes Moon Suddenly a More Attractive Destination." The New Scientist similarly raised hopes:

Newly confirmed water on the moon could help sustain lunar astronauts and even propel missions to Mars, if harvesting it can be made practical.

In this summer's Mars special report, we examined whether going back to the moon before Mars is even necessary. While a moon program may make us better prepared for the longer and more difficult journey, it could easily turn into a wasteful distraction that eats up valuable time and resources. Frankly, the chance of finding evidence of life on Mars will always make it a more attractive destination than our close, but definitely dead, satellite. And if it's water you're really after? Mars has more of that, too.

Top Image: NASA's Cassini spacecraft observations of the moon on Aug. 19, 1999 show water and hydroxyl at all latitudes on the surface, even areas exposed to direct sunlight. [Credit: NASA/JPL-Caltech/USGS] Middle Image: These images show a very young lunar crater on the side of the moon that faces away from Earth. [Credit: ISRO/NASA/JPL-Caltech/USGS/Brown Univ.] Bottom Image: A fresh, 6-meter-wide (20-foot-wide) crater on Mars on Oct. 18, 2008, (left) and on Jan. 14, 2009. [Credit: NASA/JPL-Caltech/University of Arizona]

If There's An Innovation Gap, Where Is It?

A BusinessWeek article this week, "Turning Research into Inventions and Jobs," argues that there's plenty of basic research in the world. What there's not enough of, the authors assert, is products that exploit this research.

But too often overlooked in discussions over research spending is a fundamental fact: We've already got an abundance of research. The next transistor, semiconductor, or breakthrough in MRI technology may already have been discovered. The problem is, we've dropped the ball on translating this science into invention. The vast majority of great research is languishing in filing cabinets, unable to be harnessed by the entrepreneurs and scientist-businesspeople who can set it free. We consider this shortfall academia's equivalent of Alaska's "bridge to nowhere."

The article, by Vivek Wadhwa and Robert E. Litan, was written in disagreement with an earlier BusinessWeek article, "How Science Can Create Millions of New Jobs," which asserted that "Reigniting basic research can repair the broken U.S. business model and put Americans back to work." I think Judy Estrin might agree with that, and she would certainly disagree with the new article.

Almost exactly a year ago, Estrin's book, Closing the Innovation Gap, was published by McGraw-Hill. Andy Grove of Intel called it “A passionate look at innovation by a proven innovator concerned about the level of short-sightedness surrounding us.” Grove is right — it's a topic Estrin is remarkably qualified to talk about. She's now at Stanford, but back in the day she co-founded seven different technology companies. When one of them was bought by networking goliath Cisco Systems, she became its chief technology officer. (Chapter Two of the book, "The Innovation Ecosystem," is available here.)

I did a long podcast interview with Estrin when the book came out. In it, she asserts just the opposite of what Wadhwa and Litan say. Her view is that there is a dearth of fundamental research; in fact, we're still living off the seed corn of the 1960s and 1970s, and it's running out. Here's a snippet from the interview.

SPECTRUM: Engineering and innovation are in your blood. Your mother was the second woman ever to get a Ph.D. in electrical engineering.... And your father taught at UCLA — he helped start up the computer science department. You went there in the early 1970s; you were on Vint Cerf's research team as the Internet Protocol was being invented there. But you say in your book that was also the time innovation started to decline.
ESTRIN: It started in the '70s with a narrowing of the horizons in the research community first, and that's where we began to pull back on planting the seeds for the future. It came in a variety of ways: it came in terms of government funding for research, not just the magnitude of the funding but how it was allocated, the types of funding that were coming out of the government agencies, and how it was allocated amongst different fields of science.
But the other thing that happened in the '70s and '80s is that corporations began to focus on becoming more and more efficient and more and more productive, which on the surface you would say is a good thing. Of course they need to do that, but as they did, and as they started to focus on productivity and efficiency, they essentially took all the slop out of the system, and often innovation comes out of some of those inefficiencies. And they became so efficient that people began to invest just for the short term, and in order to have sustainable innovation, you have to be willing to invest in things where you don't know what the outcome is going to be, that you don't know are going to succeed. And as corporations became more efficient they cut back on investing in things that didn't have a direct correlation to their quarterly or this year's earnings.
So we stopped planting seeds for the future, not just in research but in corporations, for a while. The startup ecosystem was still thriving, so a lot of innovation was coming out of Silicon Valley and other places where startups thrived.
But when we hit 2000, with the bursting of the Internet bubble, the corporate scandals, and the tragedy of 9/11, we saw a shift here in Silicon Valley of people becoming more risk-averse and more short-term focused. So as a result I have people coming and saying to me, "Well, come on, Judy, there's been lots of innovation over the last couple of years. Look at the iPod, look at the consumer Internet, look at what's happened in biotech."
And yes, there is still innovation going on; my claim is not that there is not. But the innovation we're doing is tending to be more incremental, and it's built upon years and years of seeds that were planted, and we're not continuing to plant enough seeds to sustain us 10, 20, 30 years from now.
I have a quote in the book that I really liked. When I interviewed Marc Andreessen, who developed the initial browser, he was telling me how quickly he was able to bring the browser to the market. I looked at him and just said, "That was an incredibly short time," and he said, "You know, the browser we developed was just the icing on a cake that had been baking for 30 years."
