Aping Faces

Weta Digital creates a more expressive chimp for Rise of the Planet of the Apes


It took an alien warrior, a ring-obsessed monster, and a skyscraper-climbing gorilla to make a more expressive chimp.

When Rise of the Planet of the Apes hits theaters Aug. 5, audiences will get a taste of the most advanced facial-capture technology yet used in film.

Building on the motion-capture effects they developed for Avatar, King Kong, and the Lord of the Rings trilogy, engineers at New Zealand–based Weta Digital made several adjustments that enabled simultaneous tracking of an actor’s body and face, facilitated on-set interaction between motion-capture and traditional acting, and produced more realistic facial expressions on animal characters.

The 20th Century Fox film tells the story of the events leading up to the 1968 classic Planet of the Apes: a drug that increases the intelligence of lab chimps triggers a primate uprising. Many of the chimps required anthropomorphic expressions to convey that new capacity.

In the past, motion capture actors would perform on a dedicated stage surrounded by cameras and specialized lighting. Freeing them from those confines involved trading suits wired with markers reflecting optical light for ones with LEDs transmitting infrared light. Algorithms transpose the actors’ motions onto the computerized primate bodies based on skeletal biomechanics.
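The retargeting step described above can be sketched very loosely in code. Everything here is invented for illustration (the bone names, the limb lengths, the planar two-bone arm); Weta’s actual solvers are proprietary. The point it demonstrates is the biomechanical idea: joint rotations transfer between skeletons unchanged, while reach distances depend on each skeleton’s proportions.

```python
import math

# Hypothetical joint hierarchies: bone name -> length in centimeters.
# Values are invented; chimps are given proportionally longer forearms.
HUMAN_BONES = {"upper_arm": 30.0, "forearm": 26.0}
CHIMP_BONES = {"upper_arm": 24.0, "forearm": 31.0}

def retarget_frame(human_pose):
    """Copy joint rotations from actor to chimp for one captured frame.

    human_pose maps bone name -> rotation about the joint in degrees.
    Rotations are proportion-independent, so they transfer unchanged;
    any translation (e.g. a reach target) would instead be rescaled.
    """
    return {bone: angle_deg for bone, angle_deg in human_pose.items()}

def reach_distance(bones, pose):
    """Planar two-bone forward kinematics: wrist distance from shoulder."""
    a1 = math.radians(pose["upper_arm"])
    a2 = a1 + math.radians(pose["forearm"])
    x = bones["upper_arm"] * math.cos(a1) + bones["forearm"] * math.cos(a2)
    y = bones["upper_arm"] * math.sin(a1) + bones["forearm"] * math.sin(a2)
    return math.hypot(x, y)

actor_frame = {"upper_arm": 40.0, "forearm": 25.0}
chimp_frame = retarget_frame(actor_frame)  # same angles, different body
```

Applying the same joint angles yields a slightly different wrist position on each skeleton, which is why production rigs layer corrections on top of the raw angle copy.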


“We wanted the performers to interact directly right on set,” says Joe Letteri, Weta’s multi-Academy Award-winning senior effects supervisor. “With infrared, we didn’t have to worry about specialized lighting interfering with film cameras or sunlight.”

Weta determined which facial muscle groups each actor was activating by analyzing facial movements frame by frame, then transferred those activations to the corresponding muscle groups on the chimp faces, so the chimps performed expressions equivalent to the actors’.

Mark Sagar spearheaded Weta’s facial technology, basing it on the Facial Action Coding System, a system that categorizes facial behaviors by muscle group, developed by noted psychologist Paul Ekman.

“It’s a vocabulary for describing facial expression by isolating muscle groups that make up any expression,” says Letteri. “We use that as the basis of our analysis. Then we developed solver algorithms to figure out the other facial muscles activated when we need to mix in movements resulting from dialogue and secondary dynamics caused by running or other directed movement.”
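The solving Letteri describes can be illustrated, very loosely, as a linear decomposition: if each Action Unit (AU) displaces tracked face points along a known basis, an observed frame can be decomposed into AU weights by solving a linear system. This toy version uses two invented AUs and two markers; Weta’s real solver is far richer, handling nonlinear muscle behavior plus the dialogue and dynamics terms Letteri mentions.

```python
def solve_2x2(a, b):
    """Solve the 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if abs(det) < 1e-12:
        raise ValueError("degenerate AU basis")
    x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return [x0, x1]

# Columns: marker displacement (mm) at full activation of each AU.
# AU12 = lip-corner puller, AU4 = brow lowerer; values are invented.
AU_BASIS = [[4.0, 0.5],   # marker at the mouth corner
            [0.2, 3.0]]   # marker at the brow

observed = [2.1, 1.6]     # this frame's measured marker displacements
w_au12, w_au4 = solve_2x2(AU_BASIS, observed)
# The solved weights are then replayed on the corresponding muscle
# groups of the chimp rig to reproduce the expression.
```

In practice the system is heavily overdetermined (many markers, many AUs), so a least-squares or constrained solver replaces the exact solve shown here.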

Weta Digital’s technology has applications beyond Hollywood, including medical research, engineering, and optics. “We’ve been collaborating with a couple of universities on different aspects of facial solving programs,” says Letteri.

To better understand the mechanics of the face, Weta has also been working with the Auckland Bioengineering Institute to create detailed models of anatomy, as well as to visualize and measure what is going on underneath the skin during facial motion.

“We were experimenting while the movie was in production,” says Letteri. “It’s a constant process of refining—we’re still improving it. The only thing holding us back is a combination of computing power and our own understanding of how facial muscles work.”

Catch Glenn Zorpette’s Spectrum Radio podcast on Weta’s technology.

[Photo credits: 20th Century Fox, Weta Digital]
