On location

From a technical point of view, Rise broke new ground by extending the motion capture volume into on-set locations, breaking the studio-based focus that had previously been the norm. In Rise the capture volume worked in exterior locations for the key Golden Gate Bridge scene at the end of the film. This multiple-actor exterior capture greatly improves the director’s ability to work with motion capture as a tool of filmmaking rather than as a second-unit-style effects shoot. Weta Digital deployed a set of motion capture volumes on location that were more robust and portable than ever before, able to stand up to the rigors of extreme conditions, from sleet and rain to extreme heat and 50 ft high walls of flame. While its use on the earlier film was limited to some major action sequences, the difference in approach on Dawn was key. For VFX to move from post to production it needs to be first unit; on-set capture in real locations is central to that. On Dawn this move was completed with the team from Weta Digital standing shoulder to shoulder in the mud, doing a range of motion capture in forests, at dams and in the city locations in New Orleans (which doubled for San Francisco). For Dawn to succeed technically, Weta needed to be the production team, not the post team. The idea of a wrap party at the end of principal photography marking the end of the main body of the filmmaking, as with films from days gone by, is clearly misplaced when thousands of digital performances are still to be added via motion editing and skilful animation, entire sequences such as the third-act tower fight climax are almost entirely digital, and your lead actors are not even out of the ‘digital makeup’ department yet.
Motion capture tech

In this film, stronger LEDs were encased in flexible silicone on the actors’ suits, allowing the performers more range of motion while remaining readable in dark conditions such as the forest or the night filming in New Orleans (doubling for San Francisco). The wireless cameras gave the team great freedom and flexibility as well as faster setup and calibration times, all without the burden of laying out meters of cable and wiring.
Rather than one approach, the team had a range of options. In a perfect setup where access and acting styles allowed, the team would employ a multiple mocap camera system with performers wearing active marker suits, along with additional reference video cameras (Sony F3s), and actors wearing single head-mounted face cameras capturing HD at a high frame rate – all of which were synchronized to deal with camera shutter times. Sometimes up to 40 mocap cameras were employed, depending on the complexity of the set and the number of performers in the scene. The advancements that R&D brought meant that the software could accurately provide motion data even at greater distances from the principal camera than ever before. The team still used a mixture of mocap data and hand animation: some scenes, such as the baby ape (Baby River) at the dam, were fully hand keyframe animated; many other scenes were hybrids. There were a total of 29 apes, 12 of which would be categorized as hero apes and 20 as ‘extras’, plus an additional 4 ‘guard’ gorillas, and now 4 orangutans. All of the hero apes were redesigned to reflect their aging and growing wisdom. Most of these required motion capture for most of the film.
“On Apes 2 there were many challenges on the shoot,” explains motion capture supervisor Dejan Momcilovic, “and it was really the next stage of what we did on Apes 1. The series has pushed the technology quite a bit. The majority of this movie was really shot outside with any kind of elements, rain, wind and water – it’s been challenging for the crew and the technology.”
Motion capture performers were put through ‘ape school’ run by actor and movement coach Terry Notary that involved both working out ape choreography and examining it inside the motion capture volume. The actors, including Andy Serkis (Caesar) and Toby Kebbell (Koba), would perform in capture suits, often with arm extensions to better match ape limb physiology. The principal live action was directed and filmed by Matt Reeves and DOP Michael Seresin on a stereo ARRI Alexa M rig. But if the acting called for Andy Serkis’ Caesar to lean in and rest foreheads with Jason Clarke playing Malcolm, or the setup time or location would not allow the full motion capture volume, then the team would work with a reduced technical environment known as ‘faux-capture’, using reference cameras in what was much more a triangulated point cloud object tracking solution.
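The core geometric idea behind any multi-camera or ‘faux-capture’ point cloud solve is triangulation: two calibrated cameras each define a ray toward the same marker, and the point closest to both rays estimates the marker’s 3D position. The sketch below is a minimal illustration of that principle only – the function name and setup are hypothetical, not Weta’s or Giant Studios’ actual tools.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Estimate a 3D point as the midpoint of the shortest
    segment between two camera rays (closest-approach method)."""
    da = dir_a / np.linalg.norm(dir_a)
    db = dir_b / np.linalg.norm(dir_b)
    w0 = np.asarray(origin_a, float) - np.asarray(origin_b, float)
    # Standard closest-point-between-two-lines coefficients.
    a, b, c = da @ da, da @ db, db @ db
    d, e = da @ w0, db @ w0
    denom = a * c - b * b  # zero only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    pa = origin_a + s * da  # closest point on ray A
    pb = origin_b + t * db  # closest point on ray B
    return (pa + pb) / 2.0

# Two cameras a couple of meters apart, both sighting a marker at (0, 0, 5).
p = triangulate(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 5.0]),
                np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 5.0]))
```

With 40-odd cameras, a production solver would combine many such ray pairs per marker and filter outliers, but the geometry reduces to this same closest-approach computation.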
Just like any DOP – if they could use the ‘A-camera motion capture rig’, they did. If they needed to shoot fast and dirty to allow the actors to act, then they did that. This approach of fully integrating effects into production was not just a one-way street: all the departments on set fully integrated into this new model. So much so that the greens department became so good at hiding the motion capture rig cameras in the forest that the Weta team had to institute an end-of-day camera count. “We literally had to make sure we didn’t forget a camera amongst a bunch of moss,” jokes Momcilovic. “A few times some things were left behind and we had to go back.”
Motion capture data was captured by Giant Studios who tracked, solved and retargeted the data with their tools before it was brought into Motion Builder. “Pretty much everything here is custom made by Weta as in house tools,” notes Momcilovic. “It’s very consistent with the rest of the pipeline. If we set up some lighting on the scene then we know it can transfer to the next stage of production.”
A significant motion editing effort was then undertaken on the performance capture data. “We have a tracking team that cleans it up, makes sure the performers look right, then there’s a re-targeting pass done by motion editing department,” says Momcilovic. “We have a pretty elaborate transfer system from a human to an ape. Then they address the interaction with the environment and all the props to make sure the performance as intended gets passed onto the shots.”
Interpreting faces for performance
On Dawn there is a greater complexity in the facial puppets and their rigs than ever before. The result is facial performances that hold up remarkably well in a huge number of tight facial close-ups, covering a wide range of emotions and story points. The audience gets to see a great array of emotions and expressions – through subtle and complex CG rendered performances.
The motion capture data of faces by its very nature is a sparse data set that needs to drive very complex facial meshes. For this film, Joe Letteri noted how successfully the team had improved the Weta face solver, which is the first stage in the process. Data from the head rig is analyzed and solved with the facial solver to produce the set of blend shapes and facial muscle movements. As part of this, the capture is stabilized, and then translated not only from a human to an ape facial shape, but also into a system that allows the animators to adjust lip sync, correct expressions and better match the acting choices of the mocap artists when shown on an ape face.
Capture data could be applied directly to a set of grouped points in a face rig, but such a system is rarely used and would only work when the captured face and the model are of the same person; even then, noise and errors due to the sparse original sampling are hard to correct and remove. By using a facial solver, the mocap data is interpreted and translated into a different format that can be processed, filtered and verified, as well as manually adjusted and enhanced. It is also possible for the data and/or the hand animation to create certain combinations of blend shapes or expressions that are not achievable by the character, or are ‘off-model’. When this occurs, a corrective shape is automatically triggered that avoids the bad combination and produces a substitute expression.
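The two ideas in that paragraph – fitting sparse marker motion to blend shape weights, and firing a corrective when an off-model combination co-activates – can be sketched in a few lines. This is a minimal illustration under assumed conventions (a hypothetical blendshape basis matrix and bad-pair list), not Weta’s actual solver:

```python
import numpy as np

def solve_blendshape_weights(basis, marker_deltas, reg=1e-3):
    """Least-squares fit of blend shape weights to sparse marker motion.
    basis: (num_marker_coords, num_shapes); marker_deltas: measured offsets.
    A small regularisation term damps noise from the sparse capture."""
    n = basis.shape[1]
    A = basis.T @ basis + reg * np.eye(n)
    w = np.linalg.solve(A, basis.T @ marker_deltas)
    return np.clip(w, 0.0, 1.0)  # keep weights in the rig's usable range

def apply_correctives(weights, bad_pairs):
    """When two shapes that combine badly ('off-model') co-activate,
    return a corrective weight keyed by the offending pair."""
    correctives = {}
    for i, j in bad_pairs:
        co_activation = min(weights[i], weights[j])
        if co_activation > 0.0:
            correctives[(i, j)] = co_activation
    return correctives

# Toy example: 2 markers (6 coordinates) driven by 3 shapes.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))
true_w = np.array([0.8, 0.3, 0.0])
d = B @ true_w                       # synthetic 'captured' marker deltas
w = solve_blendshape_weights(B, d)   # recovers approximately true_w
corr = apply_correctives(w, bad_pairs=[(0, 1)])  # shapes 0 and 1 clash
```

In production the corrective would itself be a sculpted substitute shape; here the dictionary simply records that the pair fired and how strongly.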
The same issue applies to the body skeleton solver. While the mocap data is very good, even with arm extensions and other techniques, the mocap actors will not perfectly match ape limb length and joints. “The hips will be wrong or too high, moving that down changes the head and where the face was looking,” points out Letteri. As a result the mocap still needs to be interpreted as part of the re-targeting process.
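The retargeting problem Letteri describes – human proportions never matching ape proportions – comes down to re-imposing the target skeleton’s bone lengths while keeping the captured motion’s bone directions. A minimal sketch of that idea, with hypothetical names and toy numbers (not Weta’s retargeting system):

```python
import numpy as np

def retarget_chain(human_joints, ape_bone_lengths):
    """Rescale a captured joint chain to target bone lengths.
    human_joints: (N, 3) world positions from mocap.
    ape_bone_lengths: N-1 target lengths for the ape skeleton.
    Keeps each bone's captured direction, imposes the ape's length."""
    out = [np.asarray(human_joints[0], dtype=float)]
    for i, target_len in enumerate(ape_bone_lengths):
        bone = np.asarray(human_joints[i + 1], float) - np.asarray(human_joints[i], float)
        direction = bone / np.linalg.norm(bone)   # captured orientation
        out.append(out[-1] + direction * target_len)  # ape proportion
    return np.array(out)

# A human arm chain (shoulder, elbow, wrist) stretched to longer ape bones.
human = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.55, 0.0, 0.0)]
ape = retarget_chain(human, ape_bone_lengths=[0.45, 0.4])
```

As the article notes, fixing one part of the chain shifts everything downstream – here, lengthening the upper arm moves the elbow and wrist – which is exactly why the retargeted result still needs interpretation by motion editors.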
Caesar was aged 8 to 10 years from the CG model used in Rise, with more gray added and deeper wrinkles. The mouth was also adapted to allow for greater believability with the increased dialogue of the new film. This led to an adjusted lip line and small adjustments to the facial anatomy, giving the animators two new sets of controls: one allowed for more accurate ape ‘hooting’, the other for more accurate and readable human speech, as Caesar delivers considerably more dialogue in this new film.
The most noticeable aspect of the framing and cinematography in Dawn is how much the action is played out in close-ups, especially on the apes’ faces. There is an enormous amount of close-up work considering that the faces are fully digital and need to hold up to long and subtle performances, especially the opening and closing shots. Every aspect of the ape faces had to be crafted – from the refraction of light in the eyes, to the tongue and jaw movements during the dialogue sequences.
While the mocap actors wore individual face cameras mounted on their helmets, the mapping of human expression to ape is not automatic. After registration and alignment, an automated tool does the first pass at analyzing and retargeting the performances, but the shape of the skull, the size of the muscles above the eyes and especially the mouth-to-snout lip differences make this a far from fully automated process.
One interesting aspect was the minor revisions done to the base Caesar model. More of Serkis’ unique wrinkles, and in particular his eyelids, were added to Caesar. The subtle yet accurate maps made it easier to achieve the subtle expressions Serkis gave, for example during Caesar’s dialogue with his son, Blue Eyes, back at his old house where Caesar is lying wounded. This whole scene is an excellent example of the digital characters carrying the weight of the original mocap performer’s emotional responses.
While both Caesar and his son are still very ape-like, their faces convey an enormous amount that is just not in the script as dialogue. Blue Eyes’ hesitation and facial expression at being reunited with his father while carrying a rifle – which he awkwardly places just outside Caesar’s room – is just the start of a great acting scene. This sofa scene is perhaps Andy Serkis’ finest, or at least most restrained yet powerful, scene. Both on the sofa and in the loft with the video camera, Serkis and the Weta artists deliver an outstanding piece of acting. The effort was made all the more difficult by the incredible heat and humidity Serkis faced while filming this in New Orleans in a gray mocap suit, in a completely un-air-conditioned building location.
Blue Eyes also features in a strong acting scene when he helps Maurice and the other loyal Caesar supporters escape their temporary bus imprisonment. Matt Reeves did not fully get the sequence of shots to tell this part of the story on set, so Weta provided not only matching performances but additional mocap and set reconstruction. This all helped flesh out the scene into the clear narrative seen in the film. Maurice, a fan favorite from the first film, was played by actress Karin Konoval. Maurice is perhaps the best example of how far the current films have come from the makeup solution of Limbo, the orangutan trader of human slaves played by Paul Giamatti in Tim Burton’s 2001 Planet of the Apes. While Rick Baker did incredible makeup for the Burton film, Weta’s team, freed from the constraints of human anatomy, could achieve far greater realism and more ape-like features. Weta’s digital Maurice has 912,783 strands of fur, including complex eyelashes, facial hair and ‘peach fuzz’, and is even more complex than Caesar, who had over 820,000 strands. Maurice is teacher, adviser and Caesar’s close supporter – he also has one of the more interesting live action human-ape interactions as he explores Charles Burns’ graphic novel Black Hole with Alexander, played by Kodi Smit-McPhee.
Koba’s remarkable acting is seen very strongly in the two scenes in the armory. In the first scene Koba is discovered and must play the fool to escape. In the second he uses the same device to return and take control. “Toby puts in a great performance there,” says Dan Lemmon commenting on the scene and mocap actor Toby Kebbell. “You have this change from malevolent and menacing Koba to this silly sort of circus ape performance. You see Koba’s decision, you see him snap into it and put on an act – and it’s a layered performance.”
For the second scene where Koba returns, Toby Kebbell worked with the director to flesh out the script and build on the idea of Koba playing the fool. “Matt was very open in this scene when I suggested that there might be a different way to go about it than what was in the script,” Kebbell said in an interview. “The script was fantastic and tight as a drum but it was just one of those where I felt like the cruelest thing for me is when someone comes to something very violent in the beginning in a very friendly manner.”

The entire pipeline supported Weta’s Deep Comp approach, points out Lemmon, extending even further the work Weta started around the time of the first Apes film. In this new film, the complexity of the massive number of digital characters and their placement in a stereo environment made the Deep pipeline invaluable. It also allowed the team to work more efficiently, as they could adjust individual characters and avoid vastly complex holdout mattes or the complexity of placing digital characters in scenes with smoke, fire, debris and dust. The Deep pipeline was a key part of the NUKE pipeline and was rendered as standard on every CG shot and element.
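The reason deep compositing removes the need for holdout mattes is that each pixel stores a list of depth-sorted samples rather than one flattened colour, so merging two elements is just interleaving their samples and flattening front-to-back with the standard ‘over’ operation. A minimal sketch of that idea, assuming a simplified (depth, alpha, rgb) sample format rather than the actual OpenEXR deep representation:

```python
def flatten_deep(samples):
    """Flatten a deep pixel (list of (depth, alpha, rgb) samples)
    front-to-back with the 'over' compositing operation."""
    samples = sorted(samples, key=lambda s: s[0])  # nearest first
    rgb, alpha = [0.0, 0.0, 0.0], 0.0
    for _, a, colour in samples:
        for c in range(3):
            # remaining transparency (1 - alpha) scales each new sample
            rgb[c] += (1.0 - alpha) * colour[c] * a
        alpha += (1.0 - alpha) * a
    return rgb, alpha

# An opaque ape at depth 2 in front of semi-transparent smoke at depth 5.
# Merging the two elements is just list concatenation - no holdout matte.
ape = [(2.0, 1.0, (0.2, 0.15, 0.1))]
smoke = [(5.0, 0.5, (0.6, 0.6, 0.6))]
rgb, alpha = flatten_deep(ape + smoke)
```

Because the depth ordering is resolved per sample at flatten time, a character can be re-rendered or repositioned in depth and simply re-merged, which is the efficiency gain the article describes for crowds in smoke, fire and dust.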
Environments

Dawn has many major digitally enhanced environments:
- The ape village (extended from a practical set)
- San Francisco (digitally altered and extended to be aged and deserted)
- The tower, above the human base (entirely CG)
- The Golden Gate Bridge and zone checkpoints (digitally created or enhanced)
Roto and paint in a stereo world

Dawn was shot in stereo and often on location with remote capture volumes. While this allowed the actors and motion capture artists to interact realistically, it meant that for those takes a complex stereo paint and roto task was required to rebuild a clean plate. A great example of this was the scenes with the apes riding horses. While some fully digital horses were used, in many cases it was decided to motion capture Serkis and the other mocap artists on real horses and then remove not only the riders but their saddles and associated tack. At the best of times it would be hard to animate digital apes to look like they were on the bare backs of horses, but this problem was made more complex by the mismatch between the saddle-less apes and the reference live action with saddles. Add to this the differences in limb length, and that it was all happening in stereo, and the Weta paint and roto team had their work cut out. The animators and motion editors used the mocap of the upper bodies of the mocap artists but needed to hand animate how the apes sat on the horses and believably held on without saddles. They needed the cleaned-up horse plates to be photoreal and accurate, not simply patched under the CG. Without perfectly clean horse backs, the detailed work could not be done by the animators to sell the shots. These shots were stereo solved in NUKE – in some cases with very complex shots by the dam’s edge. Such shots required an artist, such as fxphd graduate Michael Thingnes, to work on a shot sometimes for over three months to perfectly remove multiple riders and produce a perfectly stereo-matched clean plate. While 3D could be used in some shots, many of the shots were completed with just hard work, 2D tracking and stereo placement. For all the complex tools at the disposal of a company like Weta, it still just comes down to trained, dedicated artists working for months.
Quentin Hema was paint supervisor on the film, and his team was responsible for the amazingly detailed motion capture artist removal and clean plate generation in stereo, as can be seen in fxguidetv. Paint and roto were described by Joe Letteri as the unsung heroes of this film, “especially as it was all shot natively in stereo”.
Into the fight they rode…for 1000 frames

In one signature shot during the colony attack, Koba commandeers a human tank and drives it through the main gate – all the while with its turret spinning to provide a single 1000+ frame 360-degree POV of the battle. The shot was designed to demonstrate an “inexorable creep forward,” says Weta Digital visual effects supervisor Erik Winquist. “There’s this whole impending doom where Koba jumps on the tank, takes out the guy who’s shooting the gun but then he still has to dispatch the driver. In the process of dispatching the driver some control or lever gets knocked or a dead body gets knocked onto the wheel and the tank just starts spinning. So no-one is driving the tank, which is used as a device to show us everything – all the hell that’s breaking loose around us.” Previs for the 1000+ frame shot was carried out through Cinedev and MPC, with live action then filmed in New Orleans by second unit. A rehearsal period – carried out in a car park area – had informed the filmmakers of the precise movements of the tank, turret and some background action. In addition, it became clear that the stunt was going to have to be captured in just one pass. “So much of what we’re seeing on the screen is anchored by photography,” notes Winquist. “It really was a one take thing – once the door was blown we had no opportunities to do it again.” Weta Digital then set about adding in digital apes, various explosion and fire simulations and practically shot stereo fire elements, plus set extensions. “For animation it was a big mixture of Massive driven performance in the deep background,” explains Winquist, “and then there’s motion library stuff that’s hand placed and then the very specific motion capture of Koba and apes that are walking by.” Interestingly, the tank shot was filmed to show the results of its main gun firing rounds of ammunition while Koba is on board. In the practical photography this included timed explosions in the background.
The visual effects plan was then to add muzzle flashes and other effects to tie the practical explosions into the action. However, since no-one was driving the tank and it was considered that apes would not have been able to operate the gun, the background explosions were re-worked to be the result of other rocket launches and other chaos happening separate from the tank itself. Interestingly, the flares and fire in this sequence are not ‘pretty art directed’ yellow fireballs. Instead the flames clip in the hottest parts of the image, just as they would in real photography. “That is one thing that Matt (Reeves) really drove on this picture and that is the aesthetic of hard reality,” explains Lemmon. “He was very adamant for example that the night photography be bathed in these sodium vapour lights – from city street lighting – and not push some Hollywood blue moon light. I think it is really effective and it gives it a night look that is more natural and ‘real’, like street photography, with a monochromatic wash.” The final long shot involved 740 passes and a team of four compositors working in NUKE for over three months at a relatively late stage in the schedule. “There are just so many render passes in this shot,” says Winquist. “Animation worked in clusters and then we generally worked on the comp as section A, section B, et cetera, trying to polish off all the little bits.”
Rendering their world

The majority of the film was rendered in RenderMan using Weta’s approach of physically based shading and lighting. One of the major advances of the film was the first, if limited, use of Weta’s newly developed path tracing renderer Manuka. fxguide will feature more about this important new rendering advance from Weta Digital as part of our lead-up to SIGGRAPH 2014. The new renderer was only used in a few shots on Dawn (the primary renderer was RenderMan 18); one key example of the new technology was the ‘show of force’ scene in San Francisco, where the apes confront and warn off the humans. Manuka was used partly as a test, but mainly because it handled the vast complexity of so many fur-covered characters in one shot.
Luckily for the production, years of collaborative work with Wellington Zoo resulted in the zoo calling Gino Acevedo, texture supervisor and creative director. Acevedo is something of a gift to the production – not only is he arguably one of the best texture leads in the world, but his personal interest in apes extends back decades. He is internationally known for his incredible career as a texture artist, and he has also collected the work of, and worked with, many of the key artists and makeup specialists who have approached primate effects in the past.
As a result of a lifelong interest in apes, and his work on Rise of the Planet of the Apes, Acevedo had built up a close relationship with Wellington Zoo. When one of the apes at the zoo had to be brought in for dental work, and was thus under a general anesthetic, Acevedo and a small team were able to carefully photograph the ape close-up, wet his fur with a light water spray, and photograph how the fur reacted. Such direct and safe texture reference and photographic imagery is of course invaluable when trying to pull off close-up shots of apes with wet fur, as was required for Dawn.
While Acevedo’s team generated an enormous amount of material, the detailed skin, hand and feet textures are some of the most impressive. Clearly, Acevedo loved working on this film and further extending Weta’s ape texture work that followed his recent silicone innovations on Avatar and the Hobbit films in helping to achieve high quality skin textures.
What’s most interesting about this work is how it has changed and been adapted to register even more skin and pore detail. Originally, digital character development would begin with photographs of the actors turned into black and white displacement maps to acquire the textures. But now the team relies on translucent silicone brushed onto actors during a lifecast process. “We brush the silicone material onto the skin surface pushing it into the pores and fine wrinkles,” explains Acevedo, who passed on the resulting sculpts and scans to Weta Digital’s modeling department for the models’ textures.
“The old material we used to use was an alginate, which was the same material orthodontists used to use to take impressions of your teeth,” adds Acevedo. “But to make the process work with silicone we had to have a urethane cast, because the plaster was too powdery, and if we poured silicone over the top of the cast and peeled it off, we’d get a lot of plaster residue that would clog up the fine detail. With the new silicone material, right away after we take the mold off the actor’s face we can pour in urethane directly so we don’t lose a generation.”
At Wellington Zoo and on other visits to animal sanctuaries, Acevedo was also able to photograph primates and a grizzly bear for texture reference. He also used an alginate mixture to take casts of apes’ hands and feet that would then go through the same texture process. Another of Acevedo’s contributions came in the form of testing the war paint applied by the apes with real paint mixtures, which would then be referenced by the digital paint artists. “We cast some ape heads out of silicone, and we made up different kinds of paints and added sand to it so it was quite gritty,” he explains, “and then you would use your fingers like finger paint and you’d move it around and see it crack and peel up a little bit like it normally would.”

Digitally, the hair on the apes was all re-groomed with a new set of fur tools. The team used a new version of their Barbershop software to groom dry and wet fur, and to add war paint with paint chips and clumped dirt. The shader pipeline has also evolved to a point where it can take into account the way light behaves as it goes through hair and account for dual scattering – leading to more realistic renders. There were 95 unique fur grooms: 54 for the hero apes and an additional 41 for the extras.