The men and women who survived a deadly virus that wiped out much of Earth's human population hunker down amidst the ruins of San Francisco; meanwhile, a growing ape population has built a lovely and thriving community outside the city in Muir Woods. The tension between the two societies drives the action-packed Dawn of the Planet of the Apes, the sequel to 2011's Rise of the Planet of the Apes, which starred James Franco.
As this sequel begins, Franco's character has been dead for a decade, and the apes have had plenty of time to create their version of civilization. But in the real world, it's been just three years since the Rise movie. In the world of motion picture technology, though, that's an eternity. Long enough to create computer graphics gear robust enough to take out of the studio and deep into a real forest. And long enough that moviemakers no longer need to give a recognizable Hollywood star top billing to bring in audiences.
In fact, if you passed the leading man of Dawn—Andy Serkis—on the street, you wouldn't recognize his face at all, for you never see it on the screen. That's because his performance in the woods (actually, forests near Vancouver, not San Francisco) wasn't filmed traditionally; it was motion captured and used as a framework for a realistic computer-created digital ape, Caesar. And, for the first time to my knowledge, it's the performances of the motion capture actors, not the regular actors portraying humans, that are getting all the good reviews from critics; there is even talk of the first best-actor Oscar nomination for a motion-capture performance.
Motion capture enables moviemakers to create realistic non-human characters—including Gollum in The Lord of the Rings, the Na’vi of Avatar, and the intelligent chimpanzees and orangutans of Rise of the Planet of the Apes. It has also let moviemakers digitally tweak human characters, aging Brad Pitt in The Curious Case of Benjamin Button.
The technology uses a network of carefully calibrated monochrome cameras that track the movements of reflective markers attached to key spots on the bodies of actors, then use built-in processors to extract the precise coordinates of the markers. A motion capture movie set also uses a handful of regular high-resolution video cameras to record the overall scene for the director and others involved in the production to use as reference. Later, animators correlate the data about the marker locations with the same points on virtual characters—like shoulders, knees, and feet.
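The coordinate extraction those processors perform rests on classic multi-camera triangulation: each camera that sees a marker pins it to a ray through space, and the rays' intersection is solved in a least-squares sense. Here is a minimal sketch of the standard linear method, not Weta's actual pipeline; the projection matrices would come from the calibration step described below.

```python
import numpy as np

def triangulate_marker(projections, pixel_coords):
    """Estimate one marker's 3D position from its 2D image
    coordinates in two or more calibrated cameras, via the
    standard linear (DLT) method: each view contributes two
    homogeneous equations, solved in the least-squares sense."""
    rows = []
    for P, (u, v) in zip(projections, pixel_coords):
        rows.append(u * P[2] - P[0])   # x-constraint from this view
        rows.append(v * P[2] - P[1])   # y-constraint from this view
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]                         # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With perfect data two cameras suffice; with noisy forest footage, every extra camera that sees the marker adds two more rows and tightens the estimate, which is one reason 50 cameras beat eight.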
After it is imported into the computer system, the data about the movement of the markers becomes a connect-the-dots representation of the human actor's movements, which in turn drives the digital characters. Animators later adjust the movements to better match them to ape physiology. It’s a complicated business and has historically taken place in a studio or, at the most extreme, a small and contained outdoor area, where lighting, shadows, and reflections that could impede the tracking of the markers can be carefully controlled.
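That adjustment to ape physiology is, at its core, a retargeting problem: the captured skeleton has human limb proportions, so the per-segment offsets get rescaled before they drive the digital ape (chimpanzees have proportionally longer arms and shorter legs). A toy sketch of the idea, with joint names and scale factors that are purely illustrative, not Weta's values:

```python
# Map each joint to the limb segment it belongs to (illustrative).
LIMB_OF_JOINT = {
    "shoulder": "spine", "elbow": "arm", "wrist": "arm",
    "hip": "spine", "knee": "leg", "ankle": "leg",
}
# Per-segment scale factors for human-to-chimp proportions (made up).
APE_SCALE = {"arm": 1.35, "leg": 0.80, "spine": 1.0}

def retarget_offset(joint, offset):
    """Scale one joint's offset-from-parent vector by the ape
    proportion factor for the limb segment it belongs to."""
    scale = APE_SCALE[LIMB_OF_JOINT[joint]]
    return tuple(scale * c for c in offset)

def retarget_frame(frame):
    """Apply the per-limb scaling to every joint in one captured
    frame, given as {joint_name: (x, y, z) offset from parent}."""
    return {j: retarget_offset(j, off) for j, off in frame.items()}
```

Real retargeting also remaps joint rotations and enforces ground contact, which is why the article notes that animators still adjust the result by hand.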
But the movie Dawn of the Planet of the Apes, being released into theaters on 11 July, took motion capture into the wild, with more than 85 percent of the movie shot outside the studio, in forests near Vancouver and at various outdoor locations near New Orleans.
Much of the movie's action centers on a community of 2000 apes living in a rainforest-like environment. The technical challenges were huge, reported Joe Letteri, senior visual effects supervisor, and Dejan Momcilovic, motion capture supervisor, both from New Zealand’s Weta Digital.
“We went deep into the forest, where it was raining and wet,” says Letteri. “It was the absolutely worst conditions we could have had for getting the [motion capture] to work reliably in.”
The effort used about 50 motion capture cameras and eight standard high-definition cameras. So many were needed, Letteri explained, because, unlike the previous movie in the series, Rise of the Planet of the Apes, in which most of the scenes show apes individually caged, the apes in this movie are running all over the place in a visually crowded environment. With multiple actors running between trees, markers are frequently blocked from any one camera’s view; 50 cameras meant that no matter what happened, at least one camera would be tracking most markers.

The crew set some of the cameras on the ground, camouflaged by moss; put others on 5-meter-tall aluminum towers that could each hold several cameras; put single cameras on poles attached to flat bases in places where they needed to blend into their surroundings; and, in some cases, simply tied the cameras directly to tree limbs. They camouflaged them well, said Momcilovic, and in postproduction carefully looked to remove any evidence of the array of cameras. But, he said, an audience member watching the film closely will likely be able to spot the occasional camera lens peeking out from leaves or moss.
The setup had to be assembled for each day’s shoot and calibrated by a crewmember walking the scene carrying a rod with markers, to make sure multiple cameras could spot each marker at every place an actor might go. A good gust of wind, or someone bumping into a camera pole, could shift a camera enough to require recalibration. And the whole setup had to be packed away at the end of the day; even if the crew was planning to come back to the same spot, the equipment couldn’t sit out in the damp forest.
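That wand walk amounts to a coverage check: every spot an actor might occupy has to sit inside the viewing cone of at least two cameras, or a marker there cannot be triangulated. A simplified sketch of such a test follows; the camera positions, fields of view, and range limit are illustrative, and real calibration also recovers lens parameters and contends with occlusion.

```python
import math

def visible(cam_pos, cam_dir, fov_deg, point, max_range=30.0):
    """True if `point` falls inside this camera's viewing cone.
    cam_dir must be a unit vector. A real calibration pass also
    has to worry about occlusion by trees and other actors; this
    sketch checks angle and range only."""
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0 or dist > max_range:
        return False
    cos_angle = sum(a * b for a, b in zip(v, cam_dir)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def coverage_gaps(cameras, wand_positions, min_cams=2):
    """Walk the calibration 'wand path' and flag any sample position
    seen by fewer than min_cams cameras; triangulating a marker
    there would be impossible."""
    gaps = []
    for p in wand_positions:
        n = sum(visible(pos, d, fov, p) for pos, d, fov in cameras)
        if n < min_cams:
            gaps.append(p)
    return gaps
```

A gust of wind that nudges one camera changes its `cam_dir`, which is why even a small bump could send the crew back to recalibrating.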
The cameras had to be protected from the weather. And they had, for the first time, to communicate with the local data server wirelessly, adding to the complexity of getting this whole setup to work reliably. The team from Weta designed the outdoor housings and the Wi-Fi add-ons themselves. And it was a crunch—the camera cases, eventually manufactured in Canada, were still in prototype form when the crew arrived on location.
In spite of the special cases, a lot of the equipment had to spend each night surrounded by silica gel in order to dry out. And keeping everything going required the efforts of a software developer, Glenn Anderson, who also knew a lot about electronics.
“He was fixing stuff full-time,” Momcilovic recalls, along with making programming changes to the software and, via a custom website, pushing updates out to the array of cameras while they were in use. (Anderson carries a certain amount of tech cred outside the computer graphics world: he developed the Eudora Internet Mail Server back in 1993.)
Each actor wore 48 markers on his or her body; faces, filmed with head-mounted cameras, were dabbed with spots of white paint. Because of the lack of control over lighting in an outdoor environment, the Weta team couldn’t use standard reflective markers. They had previously developed infrared LED markers for a brief outdoor scene in the previous Planet of the Apes movie; these, however, weren’t reliable enough for extended filming. So, Momcilovic says, they molded the LEDs into strands of translucent plastic, strong enough to take a direct hit from a hammer without damaging the LEDs inside. The strands, attached to the actors with Velcro, could be controlled remotely: turned on or off, or made brighter to deal with changing light conditions.
As many as 13 actors wearing these active LED markers appeared in a scene at one time. The processors in the cameras, after analyzing the video on the fly to extract the marker locations, transmitted the data to a server on site. As soon as the crew brought the server out of the forest and onto the Internet, it sent all the motion capture data to New Zealand, where Weta’s team of animators could immediately start the process of using it to create digital apes.
Though the weeks shooting in the Canadian forest presented the biggest technical challenge up front, the toughest scene for the computers to process afterwards turned out to be a fully computer-graphic industrial environment, set outside of New Orleans.
“There’s a big ending sequence,” says Letteri, “with a large number of apes on a half-built skyscraper, at night, with lots of lights throughout. Everything you see in the scene is CG—creating the fur, the muscles, and the facial simulation in that environment took an intense level of rendering,” about 550 core-processor-hours per frame. The team, of course, used banks of multicore computers; typically 200 to 1000 processors were in use at a time, though sometimes as many as 50,000 ran simultaneously. Was it worth it? I intend to find out when Dawn of the Planet of the Apes opens Friday.
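For readers inclined to sanity-check those rendering numbers, a quick back-of-envelope calculation, assuming film's standard 24 frames per second (the per-frame figure is the one Letteri quotes):

```python
# Back-of-envelope check on the rendering figures quoted above.
# 550 core-hours per frame is Letteri's number; 24 fps is the
# standard film frame rate (an assumption on my part).
CORE_HOURS_PER_FRAME = 550
FPS = 24

# One second of finished footage:
core_hours_per_second = CORE_HOURS_PER_FRAME * FPS      # 13,200 core-hours

# Wall-clock time to render one frame at different farm sizes (hours):
t_1000 = CORE_HOURS_PER_FRAME / 1000     # 0.55 h, about 33 minutes
t_50000 = CORE_HOURS_PER_FRAME / 50000   # 0.011 h, about 40 seconds
```

Which is to say: even with 50,000 cores grinding away, each frame of that skyscraper sequence took the better part of a minute of wall-clock time.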
Tekla S. Perry is a senior editor at IEEE Spectrum. Based in Palo Alto, Calif., she's been covering the people, companies, and technology that make Silicon Valley a special place for more than 40 years. An IEEE member, she holds a bachelor's degree in journalism from Michigan State University.