A Birthday For Pixar’s RenderMan Software

Pixar is celebrating 25 years of making movies using RenderMan software, but the technology dates back much further.

RenderMan, the set of computer graphics tools behind much of today’s computer animation, is celebrating a birthday this year. RenderMan changed the animated movie industry by making it far easier for computers to produce realistic images, not just boxes and angles. Evidence of its impact? The studios that produced 19 of the last 21 winners of the Academy Award for Visual Effects used RenderMan technology.

Pixar is counting RenderMan’s age as 25; indeed, it was 25 years ago that Pixar launched it as a product. But the product’s lead architect, Pat Hanrahan, wasn’t starting from scratch. The technology that made RenderMan work actually dates back to 1973, when Ed Catmull, then a student at the University of Utah, came up with the Z-buffer, the first algorithm to keep track of the depth of every pixel on the screen. The Z-buffer makes it possible for computers to generate complex images, and it is now built into computers and video game machines. “I get zip for that,” Catmull told me when I interviewed him in 2001, “because at that time patenting wasn’t part of our thought processes.”
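The core idea is simple enough to sketch in a few lines. Here’s a minimal, illustrative Python sketch of the z-buffer test (not Catmull’s original code; the buffer sizes, names, and the two overlapping rectangles are my own invented example): each pixel stores the depth of the nearest surface drawn so far, and a new fragment overwrites the pixel only when it is closer.

```python
# Illustrative z-buffer sketch: one depth value and one "color" per pixel.
# A fragment replaces a pixel only if its depth is smaller (nearer).

WIDTH, HEIGHT = 8, 8
FAR = float("inf")

depth_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]   # nearest depth seen so far
frame_buffer = [["."] * WIDTH for _ in range(HEIGHT)]   # what ends up on screen

def plot(x, y, z, color):
    """Draw a fragment only if it is nearer than what is already at (x, y)."""
    if 0 <= x < WIDTH and 0 <= y < HEIGHT and z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color

# Two overlapping rectangles at different depths. The nearer one (z = 1.0)
# wins wherever they overlap, no matter which is drawn first.
for y in range(1, 6):
    for x in range(1, 6):
        plot(x, y, 2.0, "A")   # farther rectangle
for y in range(3, 8):
    for x in range(3, 8):
        plot(x, y, 1.0, "B")   # nearer rectangle

for row in frame_buffer:
    print("".join(row))
```

Because every fragment carries its own depth, hidden surfaces are resolved pixel by pixel, regardless of the order in which objects are drawn; that is what lets renderers handle arbitrarily complex scenes.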

Catmull, now president of Walt Disney Animation Studios and Pixar Animation, isn’t the only computer scientist to have contributed to RenderMan's DNA. When the Academy of Motion Picture Arts and Sciences honored the technology with an Oscar in 2001, it recognized three: Catmull, Loren Carpenter (now chief scientist at Pixar), and Rob Cook (now retired). I described their individual paths to the computer graphics technology that came together in RenderMan in IEEE Spectrum’s April 2001 article “And the Oscar Goes To…” (The trio did indeed get an Oscar for RenderMan; it was the very first Oscar ever given to developers of software.)

Follow me on Twitter @TeklaPerry

Image: Pixar


Correction made 10/7/13.
