Tech Talk

NASA: Satellite Failure Likely Caused by Balky Nose Cone

The U.S. space agency said today that the dramatic failure of one of its most expensive satellites minutes after liftoff was caused by a faulty fairing, the clamshell structure that encapsulates the science payload in the nose cone.

In a hastily prepared statement, NASA noted that the Orbiting Carbon Observatory was compromised shortly after its 4:55 a.m. EST launch from California's Vandenberg Air Force Base. The space agency said that the fairing on the Taurus XL launch vehicle failed to separate, trapping the satellite inside without power.

John Brunschwyler, the Orbital Sciences Corp. program manager for the Taurus XL, noted that the compromised upper stage landed in the Antarctic Ocean uneventfully.


The Orbiting Carbon Observatory had been dedicated to monitoring global warming by carbon dioxide from space. The half-ton satellite cost US $280 million and took nine years to build.

"Certainly for the science community it's a huge disappointment," Brunschwyler lamented. "It's taken so long to get here."

NASA's Orbiting Carbon Observatory launch manager Charles Dovale added that today's accident would probably postpone this summer's launch of another U.S. satellite, Glory, which will be used to measure soot and aerosols in the atmosphere.

The NASA statement said its Mishap Investigation Board will soon open an inquiry to determine the cause of this morning's launch failure.

Some Morose Thoughts about the Late Konrad Dannenberg

Last October I conducted what I believe was the last interview anybody did with Konrad Dannenberg, a senior member of Wernher von Braun's team who contributed significantly to the V2, Redstone, and Saturn programs. Dannenberg did not break any new ground in the conversation, but in light of an obituary that appeared in yesterday's New York Times, I feel I should offer one small correction, at the risk of causing some pain.

The Times obit mentioned Dannenberg's defense of his friend and close colleague Arthur Rudolph, who was general manager of the Saturn program responsible for putting the first men on the Moon. Later, it was revealed that Rudolph had been production supervisor at the notorious Mittelwerk plant in Nazi Germany, where the V2 was manufactured and where hundreds of slave laborers were worked to death, sabotage was rife, and suspected ringleaders were sometimes hanged from the rafters over the rocket assembly lines. When these facts came to light after the Moon landings, Rudolph had to leave the United States to escape prosecution as a war criminal, and he was stripped of his U.S. citizenship.

Though Dannenberg was among those who petitioned President Ronald Reagan to restore Rudolph's citizenship, the Times says that he was not personally a member of the Nazi Party. This is not what he told me. Defending Rudolph, he pointed out that membership was advantageous. And, "Don't forget that in the beginning Hitler did quite a bit of good for Germany. ... When I was in university Hitler came to power and he, of course, reawakened manufacturing in Germany. We had a very tough time, high inflation, in the '20s, and all these tough conditions. Then he called for a new government, like we have similar problems here now [in October 2008, in the United States]."

So, did you join too? I asked.

"I was a party member for that reason," he said.

I parted ways with Dannenberg with the sad feeling that he had been living in a kind of bubble in Huntsville, Alabama; that his thoughts and feelings about the past were those of a German of decades ago, not those of today's Germans. In the 1950s and 1960s, whenever the subject of Nazism and Hitler came up, people invariably would mention the good things he did: the Autobahns, the railways running on time. As late as the 1980s, a friend of mine formulated a general law of conversing with prewar Germans: one way or another, he found, a vintage-1930s German would always make three points:

1. They were crazy, crazy times

2. It wasn't all bad: the Nazis did some good things too

3. Nobody could have known everything that was going on

Dannenberg didn't quite get to Point 3 (perhaps because he had already conceded, before we got to talking about Nazism in general terms, that his work during World War II took him to Mittelwerk a number of times). But he did bring up the subject of Michael Neufeld's von Braun biography, in which the great German rocketeer is portrayed as a man who had made a pact with the devil (Hitler). Dannenberg said, with an air of distaste, that Neufeld just wanted to sell books. Here I must disagree, at least in part. Neufeld's book is a work of superb, meticulous scholarship, and it certainly does not cater to any mob.

No doubt Dannenberg didn't like the part about the pact with the devil, and frankly, I'm not sure I do either. Wernher von Braun, if I read him right, was a typical upper-class conservative of his day who disliked Hitler and the Nazis, to be sure, but who was also a loyal and patriotic German who would have preferred for his country to prevail. He didn't build rockets for Hitler because he was making a pact with the devil; he did it because he wanted Germany to win the war.

The real pact with the devil occurred when the U.S. government enlisted von Braun's services without exercising proper critical supervision. Von Braun himself, though an honorary member of the Waffen-SS, stood up to the Nazis, showing amazing courage on one occasion. But in allowing von Braun to put the war criminal Rudolph in charge of the Saturn program, the United States allowed the grand Moon landing to be permanently tainted in the eyes of posterity.

NOTE: The Wall Street Journal, in a remembrance published on Monday, did not make the error about Dannenberg's party membership.

NASA satellite crashes into ocean in failed launch

NASA's Orbiting Carbon Observatory failed to make it to orbit earlier today. It lifted off from Vandenberg Air Force Base in California on a four-stage Taurus XL rocket. The protective nose-cone fairing failed to separate, and even though all the rocket stages worked, the extra weight meant that the satellite didn't reach orbit. It fell back to Earth and landed somewhere in the ocean near Antarctica.

The failure of the $278 million mission is a setback for carbon monitoring efforts from space. It is also the second of eight Taurus XL missions to fail. Orbital Sciences Corporation built both the satellite and the rocket.

NASA says it will convene a committee to investigate the failure.

Spectrum author looks at the death of plasma, and why he guessed wrong on the future of projection TV

IEEE Spectrum author and Gartner analyst Paul O'Donovan this week took a look at his 2006 bets in the television display horse race (he picked plasma to lose) and made some new predictions, including the imminent demise of rear-projection television. Here's what he had to say.

"Plasma sales were around 9.5 million units in 2006, representing around 6 percent of worldwide sales for all TVs. We estimate this has grown to only 8 percent now, in 2009, and with the recession starting to bite into consumers' pockets, we expect plasma to fade out of production within five to seven years.

"The news from Pioneer [getting out of the television business] was unexpected but not surprising. Although Pioneer's plasma displays were among the highest quality on the market, the decline in total sales of all TV technologies clearly made it difficult for Pioneer to sustain reasonable profit margins and production volumes. The same situation is behind Vizio's decision to cease sales of plasma, though Vizio's pulling out of the plasma TV market was something I was expecting. The writing was on the wall for plasma before the recession appeared; the downturn has effectively brought forward the inevitable decline of plasma as a mainstream consumer product.

"I was also right about LCD TVs being dominant, but that wasn't so hard to predict. I called rear-projection TVs wrong. A shame, that; I thought they had a good chance in the super-big screen sizes over 50 inches if they were cheap enough, but the manufacturers turned away from rear-projection TV in droves, and now it looks like this is going to be the last year of rear-projection TV production."

The Week of the Nanobots

It has indeed been a week of nanobots. From the realm of research, we have the recent work of Nadrian Seeman developing a two-armed nanorobotic device that can manipulate molecules within a device built from DNA.

On the more speculative side of things, we got Ray Kurzweil revealing that his aim is to resurrect his father from the grave with the help of nanobots.

Then, on the ... pause while struggling for a descriptive word ... polemical plane, there was the business perspective on molecular nanotechnology (MNT) and nanobots, and the view of computer scientists on the same. And finally, the viewpoint of a physicist.

Even this blogger was inclined to discuss nanobots this week.

Why the sudden interest? In part, it seems to stem from the arrival of the Foresight Institute's new president and Nanodot blogger, Dr. J. Storrs Hall. Dr. Hall takes over as president from Pearl Chin, who held the position for a year and a half and was not a regular contributor to the Nanodot blog. There is also Kurzweil's rather strange decision to reveal to a Rolling Stone interviewer his Freudian need to pursue the Singularity.

But if I may apply some dime-store psychology to this sudden surge of interest, it might be due to things just being so terrible at the moment we're in. It is far better to imagine some day in the future when we can use nanobots to bring our lost loved ones back to life, or to press the button on our home-installed nanofactory that says "Ferrari."

We can dream about that or face the grim realities of the now.

Computers Now Regularly Beat Humans at Go


On Saturday I got to witness something that has only happened a handful of times: a computer beating a professional at a game of go. Now, to be fair, the game was rigged to give the program, Many Faces of Go, a huge advantage: it was allowed to place seven stones on the board before the match started, and even with that edge it won by only 4.5 points. But the fact that a computer program can win at any handicap level means that it's finally possible to make quantitative estimates about how long we have until the best human falls prey to the Deep Blue of go.

The match I watched took place at the annual meeting of the American Association for the Advancement of Science, held in Chicago over the weekend. It was part of a talk about computer science and games. Humans have been playing go in Asia for three to four thousand years, and until recently even amateurs could easily beat the best software out there. Back in 2007, Feng-Hsiung Hsu wrote "Cracking Go" for IEEE Spectrum, in which he discussed how brute-force computing techniques were finally starting to make progress in computer go.

That progress finally resulted in the first computer go win in August 2008. In addition to Many Faces of Go, programs named Crazy Stone and MoGo have combined to win five more high-handicap games since then, including the one in Chicago. These victories allowed the organizer, Robert A. Hearn, a researcher at Dartmouth College, to make a back-of-the-envelope calculation: if Moore's Law and improvements to go algorithms continue at the current pace, computers will be able to beat the best players in the world (in an even game) in about 28 years. That roughly agrees with an informal poll of the computer go mailing list, where the average estimate was 20 years.

In this match, the software ran on a 32-node cluster and played out about 7 million complete games for each move it had to make. As the game concluded, James Kerwin, the human player, noted that he lost the game when he made a blunder in the middle of the board, which cost him a few points. This made me wonder about the differences in the kinds of mistakes that computers and humans make. "Humans are more likely to miss an entire branch and variation of gameplay," Hearn told me, but programs often make moves that no human player would ever make.

IEEE Spectrum editor Philip Ross has chronicled this stylistic difference in the (computationally) easier game of chess: from the near-parity of human-computer chess six years after Deep Blue's much-publicized victory, to Kasparov's anti-computer strategies just five years ago, to the anti-anti-computer strategies the program Fritz began perfecting in 2005. Most chess programs rely on tree searches, in which all possible moves are explored to a certain depth and the resulting positions are evaluated. But as Hsu's article explains, go has many more legal positions, which makes these searches exponentially more difficult. Checkers, for instance, which was formally solved in 2007, has on the order of 10^20 possible legal positions; chess has on the order of 10^44. For go, the 19-by-19 board has on the order of 10^171 positions. In addition, go scores are tallied only at the end of a game, so it's very hard to determine who is ahead after a given number of moves.
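
The depth-limited tree search described above can be sketched in a few lines of Python. This is a toy illustration, not a real chess engine: the game tree, the leaf scores, and the function names are all invented for the example.

```python
# Minimal sketch of a depth-limited minimax search: explore all moves
# to a fixed depth, score the frontier positions, and back the values up.

def minimax(position, depth, maximizing, moves, evaluate):
    """Search all moves out to `depth`, then score leaves with `evaluate`."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = (minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children)
    return max(scores) if maximizing else min(scores)

# Toy two-ply game tree with static scores at the leaves.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": -1, "b2": 7}

best = minimax("root", 2, True,
               moves=lambda p: tree.get(p, []),
               evaluate=lambda p: leaf_scores.get(p, 0))
print(best)  # maximizer picks "a": min(3, 5) = 3 beats min(-1, 7) = -1
```

Real programs add alpha-beta pruning and a hand-tuned evaluation function, but the shape of the search is the same. The problem for go is plain: with vastly more moves per position and no reliable mid-game evaluation, this brute-force scheme runs out of steam.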

To overcome this limitation, top go programs have increasingly turned to Monte Carlo methods. In a Monte Carlo search, the computer plays out lots of random games all the way to their conclusion. Some of these games share intermediate configurations (called nodes). At each node, the program keeps track of the winning percentage and the number of games that have passed through it. This allows the software to quickly identify the most useful nodes for further exploration. The technique also scales well with parallel processing, because more cores can simply play out more random games. Hsu was cautiously optimistic about Monte Carlo methods in 2007, but he wrote, "My hunch, however, is that they won't play a significant role in creating a machine that can top the best human players in the 19-by-19 game." Now it looks like Monte Carlo methods are the future of computer go.
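
The bookkeeping behind that idea can be sketched as follows. This is a hedged toy: coin-flip playouts stand in for random go games, and the three-ply "game" and all the names are invented for illustration.

```python
import random

# Toy Monte Carlo bookkeeping: play many random games to completion and,
# at each node a game passes through, record wins and visit counts.

def random_playout(rng):
    """Simulate one random 'game': a three-ply path through a toy tree.
    Returns the nodes visited and whether the first player won."""
    path = ["root"]
    for _ in range(3):
        path.append(path[-1] + "/" + rng.choice("ab"))
    return path, rng.random() < 0.5  # coin flip stands in for the result

def monte_carlo(n_games, seed=0):
    rng = random.Random(seed)
    stats = {}  # node -> (wins, visits)
    for _ in range(n_games):
        path, won = random_playout(rng)
        for node in path:
            wins, visits = stats.get(node, (0, 0))
            stats[node] = (wins + int(won), visits + 1)
    return stats

stats = monte_carlo(10000)
wins, visits = stats["root"]
print(visits)  # 10000: every playout passes through the root
```

A real go program biases move selection toward promising nodes (for example, with the UCT formula) instead of playing uniformly at random, but the per-node statistics (wins and visits) are exactly the ones described above, and the playouts parallelize trivially across cores.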

The researchers on the panel also discussed the ways that go can advance computer science. Elwyn Berlekamp has applied game theory to go and proved that certain configurations near the end of a game can be solved analytically. He's now working to understand where the reductionism of Western science (which can analytically solve late-stage go positions) meets the more holistic approach of Eastern cultures. Berlekamp also developed a variation of the game, which he calls "coupon go," that gives researchers a way to probe the quantitative value of any given move. It's worth checking out.

Hearn is more interested in undecidable problems, for which no conceivable algorithm could ever be designed that is capable of always giving the right answer. While he and his team have created several artificial games that qualify as unsolvable, he also pointed to another go variation that might qualify: a bizarre game called Rengo Kriegspiel, in which players inherently have incomplete information.

I'm looking forward to watching programs get better and better at go. Maybe I should learn to play before it's an obsolete human skill.

NASA, ESA Decide on Jupiter Over Saturn for Next Big Planetary Missions

The American and European space agencies have come to an agreement on a long-term plan to explore the moons of Jupiter. A competing scheme to visit the moons of Saturn was set aside temporarily to make way for the Jovian exploration.

In a statement released yesterday, NASA said that its representatives and those of the European Space Agency (ESA) had decided to move forward on planning for a pair of missions to the Jupiter system to be launched in 2020.

The Americans would send a craft to ultimately investigate the moon Europa, which scientists believe harbors liquid water beneath an icy exterior. The European probe would eventually explore Ganymede, the largest moon in the solar system, which may also have a subsurface ocean.

Both missions, to take place simultaneously, would take six years to reach Jupiter; then they would begin about two years of sailing through the Jovian system, inspecting a variety of moons before settling into orbits around their respective final targets.

The twin missions have not been approved by the governments that operate the two space agencies, and the missions' budgets have not been determined. But experts have put a price of between US $2.5 billion and $3 billion on the overall plan, according to media reports.

The joint statement said that these "outer planet flagship missions could eventually answer questions about how our solar system formed and whether life exists elsewhere in the universe." It noted that the proposed space flights, known together as the Europa Jupiter System Mission, were the result of a great deal of research by NASA and ESA engineers and scientists under the umbrella of a joint working group. And it added that much more detailed studies will be required before the plan officially moves forward.

The decision to favor Jupiter first, however, should not be seen as a snub to those who had pushed for missions to Saturn's moons, which also have distinctly interesting scientific characteristics, a leading NASA official observed.

"The decision means a win-win situation for all parties involved," said Ed Weiler, associate administrator for NASA's Science Mission Directorate in Washington. "Although the Jupiter system mission has been chosen to proceed to an earlier flight opportunity, a Saturn system mission clearly remains a high priority for the science community."

A spokesperson for ESA said the joint endeavor could be a "landmark of 21st-century planetary science."

"What I am especially sure of is that the cooperation across the Atlantic that we have had so far and we see in the future, between America and Europe, NASA and ESA, and in our respective science communities is absolutely right," said David Southwood, ESA Director of Science and Robotic Exploration. "Let's get to work."

The Death of Plasma TV: You Read it Here First

Back in 2006, IEEE Spectrum author Paul O'Donovan predicted the death of plasma television in his article "Goodbye, CRT". He wrote, "A plasma TV won't be the last TV you buy. Here's why: it's got limited longevity, it's power hungry, and it's heavy," and went on to detail the inherent weaknesses of the technology.

By 2010, he predicted, "LCD TVs will dominate in sheer numbers, though mostly at the smaller screen sizes. Projection TV production will grow steadily, with 14 million manufactured in 2010. Meanwhile, plasma technology will gradually die."

At the time, it seemed like a pretty bold statement. Plasma TV sales were surging; in the third quarter of 2006, as O'Donovan's article went to press, plasma TV sales were up 140 percent compared with the previous year (counting by units).

But now, it seems, plasma is indeed on its deathbed. Last year, according to the Consumer Electronics Association, manufacturers shipped a total of 32.74 million sets to retailers in the U.S. LCD TV shipments totaled 23.76 million (73 percent); plasma came in at 3.55 million (11 percent). And the news for plasma just keeps getting worse. This month, Pioneer, manufacturer of one of the best plasma displays out there, announced that it is getting out of the TV business altogether. Low-cost TV maker Vizio also has stopped manufacturing plasma televisions and has reportedly almost sold out of its inventory.

Today, only LG, Samsung, and Panasonic are still in the plasma business. Panasonic, long convinced of a plasma future, made huge investments in plasma display manufacturing and is likely to continue to support the technology for years to come. Indeed, the company continues to push the technology forward, introducing at this year's Consumer Electronics Show an ultrathin model (2.5 cm thick), a plasma TV capable of displaying 3-D images, low-power designs, and a prototype of a 150-inch plasma display. These efforts are likely to keep plasma a viable choice for bars, airports, and billboards; flying off retail shelves into homes, not so much.

The Problem with Public Engagement in Nanotech

After reading TNTLog's account of a European Commission-funded public engagement exercise between nanoscientists in the UK and their lay neighbors, I have a theory on where these public engagement exercises usually go wrong.

The problem is not with the lay people, and it is not with the nanoscientists; it starts with the mediators who have accepted public funds to somehow measure the exchange between the two.

How this interchange is measured is anyone's guess. I certainly have no idea, but I suppose that is the alchemy of the social scientist; it's probably better that we don't know.

But after the measuring, we certainly see the results of their particular form of abracadabra: the bone-chilling scare screed about how the public expects swarms of nanobots to overrun their neighborhoods, or how nanobots will spy on them as they use the bathroom.

I have a suggestion (offered with the understanding that it will be completely ignored): let's increase the number of these public engagement exercises, but at the same time let's eliminate the intermediary between the scientists and the public. And let's absolutely abolish the dreaded reports that are produced afterward.

I would be satisfied just knowing that four or five scientists spent an hour talking to and answering questions from a room full of lay people.

Nanorobot with Two Arms is Better Than Single-Armed One

Professor Nadrian Seeman at New York University has emerged as one of the key researchers in bringing the hopes for molecular nanotechnology (MNT) closer to reality.

After completing his work in 2006 on a nanorobot with a single arm, Seeman has taken it a step further by developing a two-armed nanorobotic device that can manipulate molecules within a device built from DNA.

If we loosely define MNT as humans being able to make macroscale things atom by atom or molecule by molecule, with computers designing materials and structures and then assembling them by placing atoms exactly where we want them to go, then Seeman has gotten largely there. Except for the part about making macroscale objects, of course.

