Tech Talk


A Pro-Antitrust Administration?

In 2001 the new Bush Administration snatched victory from the hands of U.S. Justice Department officials when it settled a three-year antitrust battle with Microsoft after the department had already won the case. The Justice Department’s antitrust division went into near-dormancy after that, but today, eight years later, antitrust is back in the headlines.

A month after the Justice Department began investigating antitrust claims against IBM, and Sony disclosed it was under antitrust investigation for possible price-fixing, New York State attorney general Andrew Cuomo filed a major antitrust suit against Intel.

The Intel lawsuit comes hard on the heels of the European Commission’s record-setting fine of €1.06 billion (US $1.45 billion), imposed on Intel for antitrust practices, specifically for giving “loyalty” rebates or discounts to customers who agreed to use fewer (or none) of competitor AMD’s products. And despite a joint announcement yesterday that Intel and AMD are settling their differences and moving forward peacefully, the outstanding government lawsuits against Intel aren’t likely to be going away. The settlement “doesn’t change the evidence at all,” says John Peirce, an antitrust lawyer and partner at Bryan Cave.

Bloggers point out that New York’s Cuomo is using much of the same evidence as the European Commission, and more, with the U.S. Federal Trade Commission’s investigative aid.

But is the European example a good predictor for how the suit will go down in the U.S.?

In 2004, the EU’s European Commission fined Microsoft US $613 million for violating EU antitrust laws, a decision the U.S. Justice Department criticized rather than following suit. Of course, that was in the middle of the Bush Administration’s anti-antitrust fervor (or is it antitrust antifervor?).

More to the point, as pointed out at the time of the EU’s Intel case, there are some key differences between the EU and U.S. antitrust systems. While then-U.S. Justice Department antitrust attorney Tom Barnett took some heat for criticizing the EU Microsoft decision in 2004, his pronouncements at least

had the benefit of informing U.S. companies that they would not adopt the European approach, and that U.S. law was importantly different because it required a more rigorous form of economic analysis and more substantial evidence of consumer harm rather than speculative possiblity [sic] theorems coupled with harm to competitors.

Though Barnett was speaking from the Bush administration perspective, which was fairly lax on antitrust, one point is still worth noting: Obama’s Justice Department will have to make a case that consumers were substantially harmed by Intel’s actions during a time when PC and laptop prices plummeted.

So what about now? New York attorney David Mazur suggests that the European case against Intel still might not pave the way:

Individual judges and regulators have widely varying opinions as to the appropriateness of using foreign court decisions as precedent (or even as a definitive statement of the facts), and it’s very possible that Intel was implementing different strategies in dealing with European OEMs.

And Bryan Cave's Peirce points out that while the EU regulators expressed “a good deal of skepticism” about Oracle’s takeover of Sun Microsystems in April, the Obama Justice Department decided it wasn’t a problem, and the merger took place. “If they were going to be hawkish,” Peirce says, “I’d expect to see them aligned with Brussels. And they’re not.”

So while the Obama administration has indicated a more aggressive stance toward business practices of companies with huge market share, Peirce thinks “it’s too early to say” which way they’ll swing on antitrust. “They haven’t done anything yet,” says Peirce. And “a government investigation doesn’t mean that anybody has done anything wrong.”

Still, the road won’t be easy for Intel, and the NY AG’s suit is a major hurdle. According to the Huffington Post:

Technology analyst Rob Enderle said Intel may be facing a harder fight this time. He said the company worked through its problems "elegantly" in the 1990s, but has become "much more combative" in the past decade in its dealings with regulators. "This could represent one of the biggest dangers that Intel has ever faced," Enderle said.

If and when the administration decides to take concrete action on antitrust cases, Mazur concludes,

The Obama administration will not only have leeway to determine which companies are investigated, but will also influence how to deal with adjudicated monopolists. For example, though the Clinton administration had been pursuing a structural remedy in the Microsoft case (breaking the company up), the Bush administration took this option off the table, opting instead to pursue behavioral remedies (drawing up an agreement that limits the way the company can act).

Some business executives aren’t waiting to get nervous. In an interview this week with CNET, Google CEO Eric Schmidt acknowledged what CNET called the company’s “new role as the No. 1 target for U.S. antitrust regulators.”

Is Deep Brain Stimulation a Cure-all?

Is there anything that can't be fixed by burrowing an electrode array into the deep tissues of the brain? With varying degrees of success, deep brain stimulators have been used to temporarily defog clouds of chronic depression, stamp out migraines before they cycle out of control, and steady the movement of people with Parkinson's disease. Well, now you can add Tourette syndrome to this list of maladies.

A study in the current issue of Neurology follows 18 patients with severe Tourette syndrome for two years after having neurosurgery. During the procedure, an array of electrodes is inserted into the patient's thalamus. Wires running from the device connect to a pulse generator implanted just beneath the skin on the chest. All the parts are internal and, when activated, stimulate a highly targeted area of the brain.

Those who continued with the treatment (three either dropped out or were removed by the researchers) showed a significant decrease in the motor and verbal tics associated with the disorder. Obsessive behaviors and symptoms of depression, both commonly present in patients with severe Tourette syndrome, also improved in all but one patient.

The authors are refraining from drawing any conclusions until more controlled experiments have been conducted, and they acknowledge the risks inherent in an invasive therapy like DBS.

But even the preliminary results raise the question: How can one treatment option yield a good outcome for so many different disorders? And, overwhelmingly, the answer is, "We don't know." What we do know is which parts of the brain are best targeted for different disorders, suggesting that we know much more about the "where" than the "why" with DBS.

Andrea Cavanna, a lead author on the study, explains that "little is known about the patho-physiology of Tourette syndrome. However, dysfunction in the fronto-basal pathways seems to play a relevant role in tic generation and associated behavioral problems. The targeted thalamic nuclei are the final common outputs of the involved pathways."

So, they try it, and it seems to work. But I can't help thinking that researchers will be able to perfect therapies like deep brain stimulators only once we have fully explained the disorders they seek to treat. And that this, rather than the technology, is what holds us back.

Netbooks Are Only Part of the Solution

Netbooks are going to be huge, much bigger than they already are. Trust me on this. I say this not because I see more and more people working on them in cafes instead of on standard laptops—though I do. It’s not because I particularly want one—though for short trips I can see the appeal. It’s not because on a recent multifamily vacation one family showed up with one netbook per child.

It’s because my 70-something aunt, the one with the 30-year-old radio that you can only turn off by pulling the plug, and the TV that gets its signal from a 50-plus-year-old two-wire cable, just told me she’s thinking of getting a netbook.

Oh, it’ll be a couple of years before she actually makes the purchase, but the fact that she’s even considering it is huge. The appeal for her is the cost, for sure—if it turns out to be a mistake, it won’t be a huge mistake. But what’s also drawing her is the fact that netbooks don’t look all that high tech. They don’t take up much room, they don’t have a lot of extra buttons on the keyboard, and they don’t do vast numbers of things she wouldn’t want to do anyway—like edit video or spend hours typing long documents.

But she has been thinking that it would be pretty cool to look up a fact she read somewhere but just can’t remember exactly, or check out a new medication prescribed by her doctor before she orders it. And that’s enough usefulness to make her part with $250 or so. Once she gets one, I’ll show her how she can keep up with all her grandnieces and nephews on Facebook, and she’ll be set.

Unfortunately, much as I would have liked to, I didn’t run out that moment and get her a new netbook. Because there’s one piece of this puzzle missing—some kind of community wi-fi access. It doesn’t have to be free, it doesn’t have to be fast, but it has to be there, easy to get to at a reasonable price.

Forget dial-up—netbooks don’t even come with built-in modems, and these days the bells and whistles of most websites mean dial-up is just too slow to be viable. Cable modem or DSL would mean new wiring in her home (she’s got one corded wall phone right now, no other jacks), and a box that would have to be installed somewhere, set up, and occasionally rebooted. I can’t see convincing her to go through that hassle and expense.

But community wi-fi would be perfect. She’d need nothing but the netbook, the monthly fee would be reasonable, and, while likely slower than cable or DSL, it’d be moving plenty fast for her needs.

Which got me wondering—what happened to community wi-fi, anyway? I called Sascha Meinrath, research director of the New America Foundation’s wireless future program. He told me that it’s been going great in Europe, but in 2004 or 2005 got sidetracked in the U.S. “The rationale of community wireless, bringing low-cost or free wireless to the masses, got usurped by the corporate model,” he says, with companies trying to figure out “how do we charge money for it.” And the corporations that cities contracted with to build low-cost systems didn’t have a lot of incentive to make those systems succeed, since they’d be competing with their own higher-cost Internet access offerings. EarthLink, for example, last year shut down its community wireless systems in Philadelphia and New Orleans.

The good news, Meinrath told me, is that community wireless in the U.S. may be starting a new surge. He sees encouraging signs in the efforts of Meraki, a Google-backed startup that’s building low-cost wireless networks for companies, universities, and communities, and other low-cost efforts. He’s starting to see municipal and community groups who looked at community wireless in the past but got put off by the apparently high costs getting ready to take another look at it. And, he says, the $7.2 billion in stimulus funds targeted at increasing broadband access can only help; he’s hoping communities will spend that money on low-cost open source systems instead of expensive proprietary systems to make it go as far as possible.

Now back to my aunt. She still wants that netbook—with Internet access, but without a box in her house. Community wi-fi may be coming, but not soon enough. So I’m thinking, next time I’m visiting I’m going to boot up my laptop and see if I’m picking up any signals; if I am, I’ll go knock on a few doors and see if I can borrow a cup of broadband.

Making Talking On A Cell Phone While Driving Just A Little Bit Safer

Even though we all know that talking on a cell phone while driving is not the safest activity in the world, we’re not giving it up. We are, of course, going hands-free—in many states, that’s the law.

But short of turning off our phones altogether, there’s not much else we can do. Or is there? Eyal Ophir and his colleagues in Stanford University’s CHIMe Lab (that stands for Communications Between Humans and Interactive Media) say there is. They’re currently writing up the results of their latest study on multitasking while driving.

This is the same group of Stanford researchers who, in August, dropped a dime on multitaskers everywhere by proving that folks who think they are getting more done by juggling multiple tasks instead of focusing on just one are kidding themselves, a finding that generated a media buzz.

This time, they looked at their favorite research subjects—multitaskers of two breeds, low and high. In general, they found, high multitaskers are easily distractible—that could be a good thing when they’re having a heated cell phone conversation yet catch a glimpse of something on the road ahead that could cause trouble. Low multitaskers are less distractible, which could mean that they’re more likely to focus on their cell phone conversation than the road.

The trick was to build some kind of tool that can help both types of multitaskers pay attention to their driving when it’s most critical.

I ran into Ophir at the unveiling of a new autonomous vehicle—perhaps the ultimate solution to the driver distraction problem, but one that won’t be parked in the average driveway anytime soon. He told me about the tool he came up with.

The concept: moving the voice coming from the cell phone around in the car. Not hard to do with today’s virtual surround sound technology. Ophir designed a system that puts the voice up at the driver’s level when road conditions are relatively safe, then drops it down to the driver’s feet when conditions are more hazardous. He says the opposite mapping probably would have worked equally well, but research has shown that voices coming from below the listener are less dominant, hence his choice of high and low. He tested the system with drivers in a simulator and found that drivers quickly learned that a change in position of the voice meant, “Pay attention to the road!” They later rated the cell phone conversation as less distracting when the sound was coming from their feet.
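The high/low mapping boils down to a few lines of logic. Here is a minimal sketch of the idea—my own illustration, not Ophir’s implementation; the hazard score, the threshold, and the position labels are all invented for the example:

```python
# Illustrative sketch of the voice-placement logic (not Ophir's actual
# system). A hazard score above a threshold drops the caller's voice to
# the driver's feet; otherwise it stays at head level.

def voice_position(hazard_score, threshold=0.5):
    """Pick a virtual elevation for the caller's voice.

    Voices rendered below the listener are perceived as less dominant,
    so hazardous conditions drop the voice to foot level.
    """
    if hazard_score >= threshold:
        return "footwell"    # hazardous: make the voice less dominant
    return "head-level"      # safe: normal conversational position

# A real hazard score might combine GPS position with an accident
# database, as the article describes.
print(voice_position(0.2))  # head-level
print(voice_position(0.8))  # footwell
```

The interesting design choice is that the cue is positional rather than an alarm: the conversation itself recedes, so the driver doesn’t have to process an extra beep or chime.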

In the real world, Ophir sees this system linked to the driver’s GPS and a database of accidents, to identify potentially treacherous areas of road. Ophir expects to publish this research soon.

Stanford and Volkswagen Unveil An Autonomous Race Car

Yesterday, on the Stanford University campus, researchers unveiled Stanford’s third-generation autonomous vehicle, a modified Audi TTS. Unlike Stanley, the university’s first-generation self-driving car, and Junior, the second-generation autonomous vehicle, this latest model won’t be competing against entries from other teams for millions of dollars in prize money, for the era of challenges sponsored by the Defense Advanced Research Projects Agency (DARPA) is over. Too bad for the researchers: Stanford did well in the past, placing first in 2005 with Stanley and second in 2007 with Junior, both developed in cooperation with Volkswagen of America.

“The challenges were exciting,” said David Stavens, a PhD student in computer science who co-designed Stanley. “They inspired us to come together and advance the field.” And he will miss them. But on the plus side, without specific criteria and deadlines to meet, researchers now, Stavens said, “have a moment to step back and look at the whole problem,” which may lead to new insights. And, for Stavens personally, he’ll finally get to focus on writing his Ph.D. thesis and graduating.

Stavens did hint that some of the former competitors may be coming together to figure out a way to fill the vacuum created by DARPA’s stepping out of the autonomous vehicle challenge business, but he couldn’t be convinced to say more.

Stanford’s latest autonomous vehicle, pictured above and right, is designed for the racetrack, not city streets. It zips along at the highest speeds it can manage and still stay in control. The sensors and algorithms built to do this will, researchers hope, eventually enable ordinary cars to operate more safely by knowing their limits and forcing drivers to stay within them.
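One of the physical limits such a car must respect is easy to state. As a back-of-envelope illustration (my own, not Stanford’s controller, and the friction coefficient is an assumed value), the friction-limited cornering speed for a curve of radius r is v = sqrt(mu * g * r):

```python
import math

# Back-of-envelope sketch of a friction-limited cornering speed, one of
# the physical "limits" an autonomous race car has to stay within.
# mu is the assumed tire-road friction coefficient; g is gravity.

def max_corner_speed(radius_m, mu=0.9, g=9.81):
    """Highest speed (m/s) at which the tires can hold a curve."""
    return math.sqrt(mu * g * radius_m)

# A 50-meter curve on dry pavement (mu ~ 0.9, assumed):
print(round(max_corner_speed(50.0), 1))  # 21.0 m/s, about 76 km/h
```

A controller that knows this bound for the road ahead can cap the commanded speed, which is exactly the kind of limit-enforcing behavior the researchers hope will migrate to ordinary cars.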

The car will go through its paces live tomorrow as part of the dedication of the new Volkswagen Automotive Innovation Laboratory at Stanford’s School of Engineering. Volkswagen Group of America donated $5.75 million to fund the lab, including $2 million for the building and $750,000 a year for five years to fund research and teaching.

Clean Tech Open Announces 2009 California Finalists

Today in Palo Alto, Calif., the 2009 Clean Tech Open honored six teams of entrepreneurs as California Regional Finalists. The competition began in 2005 as the California Clean Tech Open. This is the first year the competition extended outside the state; the California winners will go on to compete with finalists from other regions.

Still, being named a California finalist—out of a field of 158 entries—is huge, and the folks picked today knew it. Four competitors sitting in front of me were literally holding their breath waiting for the award in their category to be announced—and simultaneously exhaled when they heard their company’s name. It’s such a big deal because California finalists get $100,000 in cash and services to help them build their businesses, and because being named a finalist validates their ideas, bringing them one step closer to the venture investment they need to make their dreams come true.

The California Regional Competition sends six teams on to the finals, one in each of the competition categories—Air, Water, and Waste; Energy Efficiency; Green Building; Renewables; Smart Power; and Transportation. The Pacific Northwest and Rocky Mountain competitions will each send three finalists on. A National Grand Champion will receive a $250,000 prize package.

While all six finalists announced seem to have solid business potential, two in particular generated a distinct buzz in the room when their founders stepped up to describe their ideas—not chatter, exactly, but that feeling of the folks in the audience leaning forward just a bit to make sure they’re catching every word, the rustle of people rummaging for pens to take notes.

Based on this buzz factor, I’d say the California startup with the biggest chance at the Grand Prize is Micromidas. The team, mostly recent graduates of the University of California, Davis, has developed a technology that turns raw sewage into biodegradable plastic. They intend to work with wastewater treatment companies to build the processing plants to produce the plastic. The plastic would be sold as packaging material for retail goods; consumers would then dispose of the packaging in home or municipal composting facilities. The price of the new plastic would be competitive with petroleum-based plastics. Micromidas is looking to raise $1 million in funding and expects to have its first pilot plant up and running a year from obtaining that funding.

Also a possible Grand Prize contender, based on the audience buzz: Armageddon Energy. This solar-power-kit-in-a-box company actually grew out of the Clean Tech Open itself; its founders met as volunteers behind the scenes, where they helped set up chairs and click through slides, and then started talking about what it would take to enter the competition themselves. The team has designed a 1-kilowatt rooftop solar system meant to be sold in a flat-pack box through big-box retailers like Home Depot and Costco. While consumers could do some of the assembly and installation themselves, the final step—connection to the electric grid—would require an electrician, although that electrician would not need any special solar installer certification. Along with the system, the company would sell dashboards and web services for consumers interested in closely tracking system performance.

The other California finalists were Alphabet Energy, tru2earth, Ecofactor, and Fuelsaver. National winners will be announced at an event in San Francisco on 17 November.

Read Spectrum’s coverage of past winners:
Crowning the Clean Tech Stars
California Clean Tech Open Wraps Up for 2007
Build a Kite Big Enough for a Ship and You’ll Save Fuel
Winner of Clean Tech Open Scores Again


Nobel Controversy: Willard Boyle denies Gordon's claims, proud of CCD work

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.


Last Monday, I asked Willard Boyle, who will share the 2009 Nobel Prize in physics with former colleague George Smith, a few uncomfortable questions:

Should Eugene Gordon have been on the patent for the CCD?
“I don’t think so. I don’t see it at all. He didn’t have any entries in a notebook... I’ve no recollection of his having been around.”

Should Mike Tompsett be recognized for making the first CCD camera?
“Fair enough. He’s one of the three that made the first model. No problem there.”

What about those who complain that there is no physics to the invention of the CCD—that it’s purely a work of engineering?
“They should complain to the Nobel Committee. We’ve already had a nice engineering award for it.”

In a conversation with me it didn’t seem that Gordon’s claims had dented Boyle’s happiness in the least. I asked him what the week of the announcements was like:

“It’s been extremely busy,” says the retired Haligonian. He described the 05:00 wake-up call from Stockholm as “a shocking moment.” His wife answered the phone: “Stockholm is calling.”

Since the call he’s been in contact with fellow winner George Smith: “We’re both happy. You get a nice comfortable feeling,” he says. (Several hundred thousand euros would indeed be comforting, in my opinion.)

Recalling the day he and Smith worked on the CCD concept he says that when he came home that night he told his wife: “George and I did something special today.”

Years later, he and Smith received a series of letters from major observatories around the world, thanking them for their work. “It made you feel good,” he says. He’s looking forward to going to Stockholm to accept the award.

Nobel Controversy: Smith dismisses Gordon claims, heard it all before

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.

(This was updated on 19 October.)


Did Eugene Gordon give George Smith the idea for the CCD? “Absolutely not,” says Smith. The Nobel Prize-winning engineer says Gordon has made such claims before. “Needless to say, I get a little stirred up when I hear them.”

Gordon, Smith's boss at the time, says he gave Smith the concept behind the CCD.

Regarding the Burroughs paper, Smith says: “This is a new one on me. He's made other claims before, but not that one.” Because Gordon has made claims against him before, Smith was ready to produce documents (at least, the first page of documents) he says show Gordon's signature under statements that Boyle and Smith invented the CCD.

(The quality of these documents is not so good, but they are mostly readable. They are from the 1970s and also are scans of faxes sent from Boyle's home in Halifax some time ago.)

At first read, these documents are not exactly decisive (none says anything like “Boyle and Smith invented the CCD. Sincerely, Eugene Gordon”), but they are highly suggestive. (Smith informs me that the word invent would probably not be used before the patent is granted anyway. Instead, inventors would be called "originators.")

A 6 November 1969 memo from Gordon to Bell Labs patent lawyers uses the words "recent ideas by Boyle-Smith" with regard to the CCD. A memo dated the day before says that "W.S. Boyle and G.E. Smith described a technique [the CCD]." A third, dated 23 October 1969, is also addressed to a patent lawyer but makes no statement that Smith and Boyle (or anyone in particular) are the inventors (or originators); it says merely that if the lawyer needs more information, he should contact Smith and Boyle.

However, Smith calls the October letter the most decisive, because it is the original patent submission regarding the CCD. I'm following up to find out if there's some patent-speak meaning to being named in that section.

The memos are also fascinating in their own right, because two of them focus on the idea of using a CCD as a display device rather than as an imager—an idea Smith credits Gordon with.

After dismissing Gordon's claims, Smith walked me through the invention of the CCD as he remembers it. The way he tells it, its origins were strongly tied to the desire to build a better bubble memory. Sailing buddies Smith and Boyle worked in Bell Labs' semiconductor division at the time, with Boyle in charge. Magnetic bubble memory was under development by a different division and "was a very hot item at the time." Smith recalls Boyle worrying that funding for the semiconductor division would be diverted to magnetic bubble memory unless they could come up with a competing concept.

What followed was their magical afternoon's work. "A lot of people work for years and years and it never works out," says Smith. "There's a certain amount of luck involved."

Nobel Controversy: Eugene Gordon Claims He Gave Smith The Idea for the CCD

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.

IEEE Fellow and CCD camera chip inventor Mike Tompsett has already called into question the validity of Smith’s and Boyle’s claims to this year’s Nobel Prize in Physics. Now his former Bell Labs boss, IEEE Fellow and Edison Medal recipient Eugene Gordon, is backing Tompsett’s position and striking even deeper at Boyle’s and Smith’s contributions to what the Nobel Prize committee called “the invention of an imaging semiconductor circuit—the CCD.”

“This is an outrage,” says Gordon. Tompsett invented, designed and built the first CCD camera, he says. “Smith had little to do with it. Boyle had nothing to do with it.”

What’s more, in a wide-ranging telephone interview Friday, Gordon told me that he gave Smith the concept behind the CCD, even going so far as to hand him an article by other researchers at Burroughs describing a similar shift register concept for a display device, before “sending him off to work out the numbers.”

The Burroughs paper described a shift register for moving a spot of light based on a three-phase clock. When combined with what Gordon’s group had already learned developing a silicon-diode-based video camera target, there was no great leap of logic to go from that idea to the CCD, he says. “We already knew about storing charge in the space charge gap between the oxide layer and undepleted semiconductor substrate. In that region you can move accumulated or stored charge many centimeters,” he says. All one had to do was set up voltages in a phased clock format to move the charge along. “There was nothing to it except the numerical details.”
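The charge-transfer idea Gordon describes—clocked voltages walking packets of stored charge down a line of electrodes—can be shown with a toy simulation. This is my own drastic simplification (one well per electrode, perfect transfer, none of the real three-gates-per-pixel structure), meant only to illustrate the shift-register behavior:

```python
# Toy model of clocked charge transfer in a CCD-style shift register.
# Each well holds a charge packet; one clock step moves every packet
# one well to the right, and charge leaving the last well is read out.

def clock_step(wells, readout):
    """Advance all packets one well; collect charge exiting the end."""
    readout.append(wells[-1])
    return [0] + wells[:-1]

wells = [7, 0, 0, 4, 0, 0]   # two charge packets in a six-well register
readout = []
for _ in range(6):           # six transfers empty the register
    wells = clock_step(wells, readout)

print(readout)  # [0, 0, 4, 0, 0, 7] -- the packets emerge in order
```

The "numerical details" Gordon waves away are, of course, everything this toy omits: electrode geometry, clock voltages, and transfer efficiency.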

Smith worked out those details with the man who was Gordon’s boss at the time, Willard Boyle. And the two patented a CCD intended as a kind of bubble memory. Gordon’s name was left off the patent, without his knowledge.

“I had a fit, but there was no point in fighting my boss or destroying the patent for Bell Labs,” says Gordon. “That was just one more invention. I have a hundred patents myself.”

Gordon didn’t talk publicly about the events surrounding the invention for many years, while Boyle and Smith’s standing solidified. During the 1970s he was busy developing the electron beam photolithography mask maker still used in IC fabrication, and in the early 1980s he worked on the semiconductor lasers needed for transoceanic communications. Moreover, he could not speak about the CCD because he was advising Toshiba in their legal battle with Fairchild Semiconductor regarding the technology.

Long retired from Bell Labs, Gordon today is the CEO of medical device firm Germgard Lighting, in Dover, N.J. It’s an “exciting” time for the business, which makes sterilization equipment to prevent hospital-acquired infections, says Gordon. So last week was “not as upsetting as you might think” for him.

So who should have gotten the Nobel, in Gordon’s opinion?

“It really shouldn’t be a physics award,” he says. “Most of the work has been in electrical engineering. There’s no fundamental physics.”

If it had to go to someone for the CCD, it should have gone to Mike Tompsett alone, Gordon suggests.

“The whole thing is an outrage. But such outrages are perpetrated all the time.”

IEEE Spectrum hopes to be speaking with Willard Boyle and George Smith soon. Stay tuned.

Nobel Controversy: A Portrait of Bell Labs in the Mid-1960s

Editor's Note: This is part of our ongoing coverage of the 2009 Nobel Prize in Physics. Former Bell Labs engineer Eugene Gordon claims he gave George Smith the idea for the CCD. Read more about the Nobel Prize winners themselves, the Bell Labs engineer who patented the CCD imager, and the illustrious history of Bell Labs.

Gordon gave me some background on what was happening at Bell Labs at the time, which puts the invention in context:

In the mid-1960s, AT&T wanted to move from providing what’s called plain old telephone service (POTS) to offering businesses network services. The Federal Communications Commission, Gordon recalls, turned them down. AT&T’s plan B was to come up with an offering that would give them a back door into network services: They’d make a video telephone system, claim that it was merely an extended telephone service (which would be allowed by the FCC), and sneak in broadband network services, because the Picturephone required broadband lines. The new service was called PANS (Picturephone and network services). Julius Molnar was the architect, and he was brilliant.

The camera tubes of the day worked fine in television studios, but the uncontrolled lighting in an office environment easily blinded them. (The technology’s amorphous antimony trisulfide imaging targets were the problem.) Gordon says that he was given the responsibility to develop a camera tube that would not have the bright light burn-in problem of conventional vidicons. In early 1967 he had an idea for a silicon diode array picture camera tube that should do the trick.

Nobel nominee Willard Boyle, who was not his boss at the time, “crapped all over the idea and refused to help even though he had the lab and people who could build it,” he says. But Gordon, working through another part of Bell Labs, persevered. He and his team had a product that went into manufacture in 1969. (One of the tubes was used to record the Moon landing that year, because it was impervious to the bright sunlight that had ruined other Apollo camera tubes.)

Gordon says he gave Smith the shift register idea following a gathering of the camera tube team that Gordon had called to congratulate everyone on moving the camera tube on to manufacturing at Western Electric. “If Picturephone had been a success,” says Gordon, “the camera tube would have been replaced by Tompsett’s CCD camera.”


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
