AIs Have Mastered Chess. Will Go Be Next?

Randomness could trump expertise in this ancient game of strategy

11 min read
A Go board covered with a grid of closely spaced lines and bean-size black and white stones.
Photo: Dan Saelinger; Prop Stylist: Dominique Baynes

Chou Chun-hsun, one of the world’s top players of the ancient game of Go, sat hunched over a board covered with a grid of closely spaced lines. To the untrained eye, the bean-size black and white stones scattered across the board formed a random design. To Chou, each stone was part of a complex campaign between two opposing forces that were battling to capture territory. The Go master was absorbed in thought as he considered various possibilities for his next move and tried to visualize how each option would affect the course of the game. Chou’s strategy relied on a deep understanding of Go, the result of almost 20 years of painstaking study.

Although Chou looked calm, he knew he was in big trouble. It was 22 August 2009, and Chou was matched against a Go-playing computer running Fuego, an open-source program that we developed at the University of Alberta, in Canada, with contributions from researchers at IBM and elsewhere. The program was playing at the level of a grand master—yet it knew nothing about the game beyond the basic rules.
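
Fuego, like other Monte Carlo Go programs, gets its strength not from hand-coded Go expertise but from randomness: it judges a move by playing out many random games from the resulting position and averaging the outcomes. Below is a minimal sketch of that core idea, "flat" Monte Carlo move selection, in Python. The `state` interface used here (`to_move`, `is_terminal`, `legal_moves`, `apply`, `result_for`) is hypothetical, and a real engine such as Fuego builds a search tree (UCT) on top of these playouts rather than sampling each move independently.

```python
import random

def random_playout(state):
    """Play uniformly random legal moves to the end of the game.
    Returns +1 if the player to move at `state` wins, -1 on a loss,
    0 on a draw. (`state` methods are a hypothetical game interface.)"""
    player = state.to_move()
    while not state.is_terminal():
        state = state.apply(random.choice(state.legal_moves()))
    return state.result_for(player)

def monte_carlo_move(state, playouts_per_move=1000):
    """Flat Monte Carlo: estimate each candidate move's value as the
    average outcome of random playouts, then pick the best estimate."""
    best_move, best_value = None, float("-inf")
    for move in state.legal_moves():
        child = state.apply(move)  # assumed to return a new state
        # Playouts score the position for the opponent (the player to
        # move in `child`), so negate to value the move for us.
        total = sum(random_playout(child) for _ in range(playouts_per_move))
        value = -total / playouts_per_move
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

Even this naive version captures the article's premise: with enough random samples, statistics stand in for the positional judgment a human master spends decades acquiring.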

Video Friday: Turkey Sandwich

Your weekly selection of awesome robot videos

4 min read
A teleoperated humanoid robot torso stands in a kitchen assembling a turkey sandwich from ingredients on a tray

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, Auckland, New Zealand

Enjoy today's videos!


New AI Speeds Computer Graphics by Up to 5x

Neural rendering harnesses machine learning to paint pixels

5 min read
Four examples of Nvidia's Instant NeRF 2D-to-3D machine learning model placed side-by-side.

Nvidia Instant NeRF uses neural rendering to generate 3D visuals from 2D images.

NVIDIA

On 20 September, Nvidia’s vice president of applied deep learning, Bryan Catanzaro, took to Twitter with a bold claim: In certain GPU-heavy games, like the classic first-person platformer Portal, seven out of eight pixels on the screen are generated by a new machine-learning algorithm. That’s enough, he said, to accelerate rendering by up to 5x.
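The tweet doesn’t spell out the arithmetic, but the figure is consistent with combining DLSS-style super resolution (rendering at one-quarter of the output resolution) with frame generation (synthesizing every other frame entirely). A back-of-the-envelope sketch, with both factors assumed rather than taken from the article:

```python
# Rough check of the "7 of 8 pixels" claim under two assumptions
# (not stated in the article): 4x upscaling and 1 generated frame
# per traditionally rendered frame.
upscale_factor = 4     # 1 rendered pixel becomes 4 output pixels
generated_frames = 1   # one fully synthesized frame per rendered frame

rendered_fraction = (1 / upscale_factor) / (1 + generated_frames)
print(f"traditionally rendered pixels: {rendered_fraction:.3f}")  # 0.125
print(f"ML-generated pixels: {1 - rendered_fraction:.3f}")        # 0.875 = 7/8
```

Under those assumptions only one pixel in eight is rendered the traditional way; the speedup stays below 8x because the neural networks themselves consume GPU time.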

This impressive feat is currently limited to a few dozen 3D games, but it hints at the gains neural rendering will soon deliver, and at the new capabilities the technique could unlock in everyday consumer electronics.


Bring Physics-Based AI Into Your Research With NVIDIA Modulus

Learn how to use physics-based machine learning to accelerate science and engineering simulations

1 min read
Nvidia

High-fidelity simulations are widely used in industrial, seismic, weather/climate, and life-sciences applications. However, traditional simulations remain computationally expensive and impractical for real-time applications. They are also discretization dependent, and they do not easily assimilate measured or synthetic data from various sources. Thanks to rapid developments in AI for science and engineering problems, machine learning has assumed an important complementary role in addressing these gaps in traditional methods.

NVIDIA Modulus is a physics-based machine-learning platform that provides several state-of-the-art network architectures along with both data-driven and PDE-driven AI techniques for solving real-world science and engineering problems. Modulus includes performance features for single- and multi-GPU/node systems, plus connectivity with several NVIDIA toolkits and technologies. Examples and documentation ease learning for students, while researchers can customize the framework through its APIs.
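To make "PDE-driven" concrete, here is a minimal physics-informed training loop written in plain PyTorch; it is a generic illustration of the technique, not Modulus’s actual API, and the toy problem is hypothetical: learn u(x) satisfying u″(x) = −sin(x) on [0, π] with u(0) = u(π) = 0, whose exact solution is u(x) = sin(x).

```python
import torch

# Toy physics-informed network (a generic sketch, not the Modulus API):
# fit u(x) so that u''(x) = -sin(x) on [0, pi], u(0) = u(pi) = 0.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1) * torch.pi  # random collocation points in [0, pi]
    x.requires_grad_(True)
    u = net(x)
    # Differentiate the network with autograd to form the PDE residual.
    (du,) = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)
    (d2u,) = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)
    pde_loss = ((d2u + torch.sin(x)) ** 2).mean()
    # Enforce the boundary conditions with a penalty term.
    boundary = torch.tensor([[0.0], [torch.pi]])
    bc_loss = (net(boundary) ** 2).mean()
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A framework like Modulus packages the pieces written by hand here (geometry sampling, PDE residuals, boundary constraints, and the training loop) as configurable components.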

Introduction to NVIDIA Modulus: A Physics-ML Framework for Research

This webinar will introduce applications of machine learning across various domains of science and engineering, with a deep dive into the code implementation, training, solution, and visualization aspects of a physics-ML workflow.

Register now for this free webinar!
