Tech Talk

Computers Now Regularly Beat Humans at Go

Photo: Deerbourne

On Saturday I got to witness something that has only happened a handful of times: a computer beating a professional at a game of go. Now, to be fair, the game was rigged to give the program, Many Faces of Go, a huge advantage (it was allowed to place seven stones on the board before the match started), and the human still lost by only 4.5 points. But the fact that a computer program can win at any handicap level means that it's finally possible to make quantitative estimates of how long we have until the best human falls prey to the Deep Blue of go.

The match I watched took place at the annual meeting of the American Association for the Advancement of Science, held in Chicago over the weekend, as part of a talk about computer science and games. Humans have been playing go in Asia for three to four thousand years, and until recently, even amateurs could easily beat the best software out there. Back in 2007, Feng-Hsiung Hsu wrote "Cracking Go" for IEEE Spectrum, in which he discussed how brute-force computing techniques were finally starting to make progress in computer go.

That progress finally resulted in the first computer win over a human professional, in August 2008. In addition to Many Faces of Go, programs named Crazy Stone and MoGo have combined to win five more high-handicap games since then, including the one in Chicago. These victories allowed the organizer, Robert A. Hearn, a researcher at Dartmouth College, to make a back-of-the-envelope calculation: if Moore's Law and improvements to go algorithms continue at the current pace, he predicts that computers will be able to beat the best players in the world in an even game in about 28 years (roughly in agreement with an informal poll of the computer go mailing list, where the average estimate was 20 years).
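Just to show the shape of such an estimate (this is my own illustrative arithmetic, not Hearn's actual calculation), suppose that some fixed number of years of hardware and algorithmic progress buys the programs one handicap stone of strength; the current seven-stone gap then turns into an estimate of a few decades.

```python
# Illustrative back-of-the-envelope only; the per-stone rate below is an assumption,
# not a figure from Hearn's talk.

handicap_stones_today = 7     # stones Many Faces of Go received against the professional
years_per_stone = 4           # assumed years of hardware + algorithm gains per stone

print(f"Rough years to an even game: {handicap_stones_today * years_per_stone}")
# With this assumed rate, the estimate lands in the same ballpark as the 28-year figure
# from the talk (and the 20-year average from the computer-go mailing list poll).
```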

In this match, the software ran on a 32-node cluster and played out about 7 million complete games for each move it had to make. As the game concluded, James Kerwin, the human player, noted that he lost the game when he made a blunder in the middle of the board, which cost him a few points. This made me wonder about the differences in the kinds of mistakes that computers and humans each make. "Humans are more likely to miss an entire branch and variation of gameplay," Hearn told me, whereas programs often make moves that no human player would ever make.

IEEE Spectrum editor Philip Ross has chronicled this stylistic difference in the (computationally) easier game of chess: from the near parity of human-computer chess six years after Deep Blue's much-publicized victory, to Kasparov's anti-computer strategies just five years ago, to the anti-anti-computer strategies the program Fritz began perfecting in 2005. Most chess programs rely on tree searches, in which all possible moves are searched out to a certain depth and the resulting positions are then evaluated. But as Hsu's article explains, go has many more legal positions, which makes these kinds of searches exponentially more difficult. Checkers, for instance, which was formally solved in 2007, has on the order of 10^20 possible legal positions; chess has on the order of 10^44. Go, on its 19-by-19 board, has on the order of 10^171. In addition, go scores are tallied only at the end of a game, so it's very hard to determine who is ahead after a given number of moves.
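For readers who haven't seen one, here's a minimal sketch of that kind of depth-limited search in Python. It's an illustration only, not any chess engine's real code: the toy "game," the function names, and the parameters are my own inventions. The point is simply that the work grows roughly as the branching factor raised to the search depth, which is why go's enormous number of legal moves is so punishing.

```python
# A toy depth-limited minimax, for illustration only (not any engine's actual code).
# The "game" here is hypothetical: a state is just a number, each move adds 1 or 2,
# and the static evaluation is the number itself.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Search every legal move to a fixed depth, then score the resulting position."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy usage: with two legal moves per turn and a depth of 4, the search visits
# 2**4 = 16 leaf positions; the cost grows roughly as branching_factor ** depth.
best = minimax(0, 4, True,
               legal_moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 6: the maximizer adds 2 on its turns, the minimizer adds 1 on its turns
```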

To overcome this limitation, top go programs have increasingly turned to Monte Carlo methods. In a Monte Carlo search, the computer plays out lots of random games all the way to their conclusion. Some of these games share intermediate configurations (called nodes). At each node, the program keeps track of the winning percentage and the number of games that have passed through it. This allows the software to quickly identify the most useful nodes for further exploration. Such a technique also scales better with parallel processing, because more cores can simply play out more random games. Hsu was cautiously optimistic about Monte Carlo methods in 2007, but he wrote, "My hunch, however, is that they won't play a significant role in creating a machine that can top the best human players in the 19-by-19 game." Now, however, it looks like Monte Carlo methods are the future of computer go.
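To make that bookkeeping concrete, here's a minimal Python sketch of the idea. The Node class, the selection rule (a UCB1-style formula that balances win rate against how little a node has been explored, which many Monte Carlo go programs use in some form), and the playout stub are my own illustration, not code from Many Faces of Go, Crazy Stone, or MoGo.

```python
import math
import random

# Minimal sketch of the Monte Carlo bookkeeping described above: every node remembers
# how many random playouts passed through it and how many of those were wins, and the
# node chosen for the next playout balances win rate against how unexplored it is.

class Node:
    def __init__(self):
        self.wins = 0     # playouts through this node that ended in a win
        self.visits = 0   # total playouts through this node

    def win_rate(self):
        return self.wins / self.visits if self.visits else 0.0

def select(children, total_playouts, c=1.4):
    """Pick the child with the best 'win rate + exploration bonus' score (UCB1-style)."""
    def score(node):
        if node.visits == 0:
            return float("inf")   # always try an unexplored node first
        return node.win_rate() + c * math.sqrt(math.log(total_playouts) / node.visits)
    return max(children, key=score)

def random_playout():
    """Stand-in for playing one random game to its conclusion; returns True on a win."""
    return random.random() < 0.5

# Toy loop: funnel each new playout through the currently most promising node.
children = [Node() for _ in range(5)]
for total in range(1, 10_000):
    node = select(children, total)
    node.wins += random_playout()
    node.visits += 1
```

More cores simply mean more playouts per second feeding these counters, which is why the approach parallelizes so naturally.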

The researchers on the panel also discussed the ways that go can influence computer science. Elwyn Berlekamp has applied game theory to go and proved that certain configurations near the end of games can be solved analytically. He's now working to understand where the reductionism of Western science (which can analytically solve late-stage go positions) meets the more holistic approach of Eastern cultures. Berlekamp also developed a variation of the game, which he calls "coupon go," that gives researchers a way to probe the quantitative value of any given move. It's worth checking out.

Hearn is more interested in undecidable problems, for which no conceivable algorithm could ever be designed that is capable of always giving the right answer. While he and his team have created several artificial games that qualify as undecidable, he also pointed to a go variation that might qualify: a bizarre game called Rengo Kriegspiel, in which players inherently have incomplete information.

I'm looking forward to watching programs get better and better at go. Maybe I should learn to play before it's an obsolete human skill.

NASA, ESA Decide on Jupiter Over Saturn for Next Big Planetary Missions

The American and European space agencies have come to an agreement on a long-term plan to explore the moons of Jupiter. A competing scheme to visit the moons of Saturn was set aside temporarily to make way for the Jovian exploration.

In a statement released yesterday, NASA said that its representatives and those of the European Space Agency (ESA) had decided to move forward on planning for a pair of missions to the Jupiter system to be launched in 2020.

The Americans would send a craft to ultimately investigate the moon Europa, which scientists believe harbors liquid water beneath an icy crust. The European probe would eventually explore Ganymede, the largest moon in the solar system, which may also have a subsurface ocean.

Both missions, to take place simultaneously, would take six years to reach Jupiter; then they would begin about two years of sailing through the Jovian system, inspecting a variety of moons before settling into orbits around their respective final targets.

The twin missions have not been approved by the governments that operate the two space agencies, and the missions' budgets have not been determined. But experts have put a price of between US $2.5 billion and $3 billion on the overall plan, according to media reports.

The joint statement said that these "outer planet flagship missions could eventually answer questions about how our solar system formed and whether life exists elsewhere in the universe." It noted that the proposed space flights, known together as the Europa Jupiter System Mission, were the result of a great deal of research by NASA and ESA engineers and scientists under the umbrella of a joint working group. And it added that much more detailed studies will be required before the plan officially moves forward.

The decision to favor Jupiter with attention first, however, should not be seen as a snub to those who had pushed for missions to Saturn's moons, which also have distinctly interesting scientific characteristics, a leading NASA official observed.

"The decision means a win-win situation for all parties involved," said Ed Weiler, associate administrator for NASA's Science Mission Directorate in Washington. "Although the Jupiter system mission has been chosen to proceed to an earlier flight opportunity, a Saturn system mission clearly remains a high priority for the science community."

A spokesperson for ESA stated that the joint endeavor could be a "landmark of 21st-century planetary science."

"What I am especially sure of is that the cooperation across the Atlantic that we have had so far and we see in the future, between America and Europe, NASA and ESA, and in our respective science communities is absolutely right," said David Southwood, ESA Director of Science and Robotic Exploration. "Let's get to work."

The Death of Plasma TV: You Read It Here First

Back in 2006, IEEE Spectrum author Paul O'Donovan predicted the death of plasma television in his article "Goodbye, CRT." He wrote, "A plasma TV won't be the last TV you buy. Here's why: it's got limited longevity, it's power hungry, and it's heavy," and went on to detail the inherent weaknesses of the technology.

By 2010, he predicted, "LCD TVs will dominate in sheer numbers, though mostly at the smaller screen sizes. Projection TV production will grow steadily, with 14 million manufactured in 2010. Meanwhile, plasma technology will gradually die."

At the time, it seemed like a pretty bold statement. Plasma TV sales were surging; in the third quarter of 2006, as O'Donovan's article went to press, plasma TV unit sales were up 140 percent compared with the same quarter a year earlier.

But now, it seems, plasma is indeed on its deathbed. Last year, according to the Consumer Electronics Association, manufacturers shipped a total of 32.74 million TV sets to retailers in the United States. LCD TV shipments totaled 23.76 million (73 percent of the total); plasma came in at 3.55 million (11 percent). And the news for plasma just keeps getting worse. This month, Pioneer, manufacturer of some of the best plasma displays out there, announced that it is getting out of the TV business altogether. Low-cost TV maker Vizio has also stopped manufacturing plasma televisions and has reportedly almost sold out of its inventory.

Today, just LG, Samsung, and Panasonic are still in the plasma business. Panasonic, long convinced of a plasma future, made huge investments in plasma display manufacturing and is likely to continue to support the technology for years to come. Indeed, the company continues to push the technology forward, introducing at this year's Consumer Electronics Show an ultrathin model (2.5 cm thick), a plasma TV capable of displaying 3-D images, low-power designs, and a prototype of a 150-inch plasma display. These efforts are likely to keep plasma a viable choice for bars, airports, and billboards; flying off retail shelves into homes, not so much.

The Problem with Public Engagement in Nanotech

After reading TNTLog's account of a European Commission-funded public engagement exercise between nanoscientists in the UK and their lay neighbors, I have a theory about where these public engagement exercises usually go wrong.

The problem is not with the lay people, and it is not with the nanoscientists; it starts with the mediators who have accepted public funds to somehow measure the exchange between the two.

How this interchange is measured is anyone's guess. I certainly have no idea, but I suppose that is the alchemy of the social scientist; it's probably better that we don't know.

But after the measuring, we certainly see the results of their particular form of abracadabra: the bone-chilling scare screed about how the public expects swarms of nanobots to overrun their neighborhoods, or fears that nanobots will spy on them as they use the bathroom.

I have a suggestion (offered with the understanding that it will be completely ignored): let's increase the number of these public engagement exercises, but at the same time let's eliminate the intermediary between the scientists and the public. And let's absolutely abolish the dreaded reports that are produced afterwards.

I would be satisfied just knowing that four or five scientists spent an hour talking to and answering questions from a room full of lay people.

Nanorobot with Two Arms Is Better Than Single-Armed One

Professor Nadrian Seeman at New York University has emerged as one of the key researchers in bringing the hopes for molecular nanotechnology (MNT) closer to reality.

After creating a nanorobot with a single arm in 2006, Seeman has taken the work a step further by developing a two-armed nanorobotic device that can manipulate molecules within a device built from DNA.

If we loosely define MNT as humans being able to make macroscale things atom by atom or molecule by molecule, with computers designing and then assembling materials and structures by placing atoms exactly where we want them to go, then Seeman has largely gotten there. Except for the part about making macroscale objects, of course.

Al Gore Calls on All Scientists to Fight Against Global Warming

Speaking to a packed audience at the annual meeting of the American Association for the Advancement of Science, currently underway in Chicago, former US vice president Al Gore called on all scientists to help fight against global warming, and for every scientist to use his or her position of respect and trust among fellow citizens and neighbors to raise awareness of the problem the world is facing.

"This is no time to sit back," he said. "We as a species must make a decision..... Continuing on our present course will threaten human civilization."

Gore ascribed the problem we are facing today to "our absurd overdependence on carbon-based fuel."

He was preaching to the choir. Most scientists believe that human activity has led to climate change.

But global warming as a result of human action is still controversial in some quarters. Gore, who won a Nobel Peace Prize for his work on climate change, compared the situation to the one faced by Copernicus and Galileo when they sought to overturn the Earth-centered Ptolemaic view of the cosmos. Copernicus instead espoused the view that the Earth orbited the Sun. Four hundred years ago, Galileo's telescope yielded observations that bore out Copernicus and constituted another "inconvenient truth," said Gore.

He mused that one reason some find it hard to believe that humans have produced climate change is that, when we look up, the sky appears to be a vast expanse; it seems absurd that human beings could have any impact on that limitless blue.

Gore got a standing ovation from the scientists present.

A Funeral for Analog TV

Analog TV may have gotten a stay of execution this week, with President Barack Obama's signing of a bill to delay the switch to 12 June, but at least one funeral ceremony will go on as scheduled. To be held on Tuesday, 17 February, at the Berkeley Art Museum, the "Funeral for Analog TV" will feature futurist Paul Saffo spelling out "the sordid history of the Analog TV Signal's life" and author Bruce Sterling delivering the eulogy. The event announcement/obituary points out that Analog Television is survived by its wife, Digital Television (a May-December romance, I suppose), and its second cousin, Internet Television.

An Unlucky Friday the Thirteenth for RF Health Research


Today, the Motorola ElectroMagnetic Energy Research Laboratory in Plantation, Florida, is officially closing its doors. According to Microwave News, the company has been a world leader in RF radiation safety research since 1993, when cell phones were first accused of causing brain tumors.

It wasn't a big laboratory (13 engineers and scientists at its peak), but it had a big impact. Joseph Morrissey, a regular contributor to Spectrum (see "The Cell Phone and the Hearing Aid" and "End the Mobile Phone Ban in Hospitals") and one of the researchers laid off, says, "Members of the group took lead roles on almost every relevant standard in the field, including IEEE C95.1-2005 (human SAR exposure limits), IEEE 1528-2003 and IEC 62209-2005 (wireless device SAR testing procedures), and ANSI C63.19-2007 (mobile phone/hearing aid compatibility)." No meeting of the Bioelectromagnetics Society was complete without several members of the lab participating and presenting new information, and researchers at the laboratory contributed regularly to IEEE publications such as the IEEE Transactions on Microwave Theory and Techniques and the IEEE Transactions on Vehicular Technology.

In recent years, various restructurings in response to declining company earnings thinned the group. After today, it will be gone completely, leaving a big gap in the field of radiofrequency exposure research.

Nanotechnology by Any Other Name...Is Something Else Entirely

One of the issues I discussed in my very first post on Tech Talk was how advocates for Eric Drexler's vision for molecular nanotechnology (MNT) were trying to wrestle back ownership of the term "nanotechnology." And in my most recent post, I bemoaned the seemingly endless parade of wrong-headed definitions of nanotechnology.

It appears that definition is just an insurmountable problem for nanotechnology. The latest example is a response on the Foresight Institute's Nanodot blog that takes issue with Richard Jones' recent blog entry, first published in Nature Nanotechnology, entitled "The Economy of Promises."

The point of Jones' piece, at least to my reading, is that overhyping the near-term potential of nanotechnology does more harm than good, not only to investors who believe the malarkey but also to nanoscientists, who, although they should know better, are drawn into believing the hype themselves.

In broadening his point, Jones refers to Alfred Nordmann's contention in "If and then: a critique of speculative nanoethics" that "speculations on the ethical and societal implications of the more extreme extrapolations of nanotechnology serve implicitly to give credibility to such visions." To illustrate this phenomenon, Jones uses the Foresight Institute and the Center for Responsible Nanotechnology as examples.

The Foresight Institute was not going to take this perceived attack lying down; it claims that the real problem is that materials scientists co-opted its term, "nanotechnology."

As the argument seems to go, Drexler popularized the term nanotechnology in his book Engines of Creation, so when the general public heard that thousands of scientists were working on "nanotechnology," of course they thought that table-top factories and nanobots were just around the corner. This, the argument goes, is why nanotechnology has failed in its promise.

If I follow that logic, all would be well with the world if materials scientists had described their work of engineering and manipulating materials at the nanoscale to bring about novel properties as anything but "nanotechnology."

And no doubt if this had held true, all the funding that now gets funneled into national nanotechnology initiatives around the world would either not exist at all or be aimed at quite different purposes.

One result of these different purposes might have been that today we would have much better computer-generated animation of how a table-top factory might work someday.

I swarm, you swarm, we all swarm for millimeter-sized autonomous robots

If you thought the cyborg moth had cargo problems, consider the millimeter-sized I-SWARM microbot presented at ISSCC by researchers from the University of Barcelona: a ladybug-sized piece of machinery that has to lug around its own control electronics, communication electronics, capacitors, and tiny solar panels. It weighs 70 mg, about the weight of a standard multivitamin.

First, an important question: why are we making fake ants?

Tiny robot swarms would be good for many applications, some of which were enumerated back in October on our sister blog, Automaton.

First, if you send a swarm of ten thousand to do some task and one thousand die, chances are your project will be unaffected, so the system is insulated from failure by redundancy.

Second, you can do things that are possible only through teamwork. Astrobiologists love the idea of swarming robots because, ideally, the robots could assemble themselves into a bridge, a ladder, a pile, whatever the task required.

Finally, if you can make them on the cheap and out of silicon, you'll be able to fab them like microchips, which means that you might end up with similarly small per-unit costs. Pretty good for an autonomous robot.

But you can't build one of these little suckers with commercial off-the-shelf electronics, which is why Raimon Casanova Mohr created the first system-on-chip designed specifically for an autonomous mobile microbot. The microbot can be programmed optically, somewhat the way you would beam a business card to a BlackBerry, and the chip includes all of the necessary electronics except for three capacitors. Essentially, like tiny ants, these I-SWARM robots can move autonomously, process limited "sensory" data, make decisions based on that data, and then communicate among themselves to do whatever they are programmed to do. Their programmed behavior can be changed on the fly.

This is part of an EU-funded project, though it's unclear which one. Automaton has it as one of two sister projects, both funded under the EU's Seventh Framework Programme and spanning from January 2008 to 2013. An article in the UK's Register reports that it was funded earlier by the EU Information Society Technologies (IST) Sixth Framework Programme, starting in 2003.

Mohr showed pictures of the setup in his lab: the swarm's "work area" is a space about the size of a sheet of letter paper, enclosed in plexiglass like a penalty box. It's uniformly illuminated by a high-intensity lamp, which powers a 3.9-square-millimeter solar-cell array on the I-SWARM; the array in turn generates 1 mW for the system-on-chip and half that for the robot's "body": three piezoelectric legs driven by square waveforms at their resonance frequency, 32.86 kHz. The legs can move the robot forward or backward, or spin it. The system-on-chip is 2.6 mm by 2.6 mm, about the size of a sunflower seed. Most of these components, including two little capacitors, are assembled on a flexible printed circuit board that does double duty as the bug's backbone. But the leftover energy stored in the capacitors isn't enough to retain the bug's programming, so it has to be reprogrammed every time it starts. The process takes around 45 minutes.
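As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, not anything from the ISSCC paper), the implied electrical output density at the solar cell works out to a few hundred microwatts per square millimeter, which helps explain why the work area needs a dedicated high-intensity lamp rather than ordinary room light.

```python
# Back-of-the-envelope check of the quoted figures (my arithmetic, not the researchers'):
# how much electrical power per square millimeter must the solar-cell array deliver?

soc_power_mw = 1.0        # reported output for the system-on-chip
legs_power_mw = 0.5       # "half of that" for the three piezoelectric legs
cell_area_mm2 = 3.9       # reported solar-cell array area

total_mw = soc_power_mw + legs_power_mw
density_uw_per_mm2 = total_mw / cell_area_mm2 * 1000
print(f"Total electrical budget: {total_mw} mW")
print(f"Implied output density: {density_uw_per_mm2:.0f} microwatts per square millimeter")
# Roughly 385 uW/mm^2 of electrical output, far more than such a small cell could
# harvest under ordinary indoor lighting, hence the high-intensity lamp.
```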

The programming is done by an IR projector that sits next to the high-intensity lamp.

The I-SWARM has an optical chip for short-range IR communication, with four LED/photodiode pairs, one on each side of the chip; information is sent by LED and received by photodiode. This is how each robot positions itself relative to its herd and to the projector, and the IR projector also confirms the exact position of each robot over the work area.

The robots are very cute; after the ISSCC presentation, I overheard one engineer refer to them as "little animals," which I found telling. Sadly, there was no video, because the solar panels were apparently on the fritz. But it's not the end of these guys. We're one year into the project's second phase, and I'm interested to see what awaits us in 2013. Meanwhile, check out Automaton for the old videos.

