The amount of detail in the environments is fantastic. What kind of technical issues did you have to overcome to render Mechsplorers?
The technical aspects of Mechsplorers offered quite a few challenges. We had not done anything of this scope before, and it took some working out to pull it off. MAKE is a smaller studio, so the 3D artists often had to wear several hats and stay nimble to cover all the ground needed to complete the spot. The most important thing for a smooth production, besides communication, is a functional pipeline. Ours got an overhaul during the Mechsplorers process. Much of our pipeline has been pieced together over the last 10 years. Whenever something can be streamlined or optimized, a new tool or process is often prototyped in a few minutes as a barebones script. Then, as its use becomes more defined, it grows into a full-fledged tool. Eventually the gaps between tools get smaller, and artists are able to get more done, both faster and more reliably.
For example, our character pipeline relies on multiple versions of each character rig: an animation rig that is light and fast, an FX (or full) rig, and a render rig. The animation rig plays back in real time alongside any other characters in the scene, so the animators can see their work as easily as possible and get good feedback in the viewport. Once the animation is complete, it is brought onto our full rigs, where we run cloth and hair simulations and clean up any undesirable deformations. Once everything looks good, we cache out all the deformations and load them onto our render rigs, which are the ones that actually get rendered. They are just meshes referencing point caches, so nothing extra ends up in our layout scenes. To make this multi-step process easier, we have in-house caching tools that can write out a character's deformations with one click, version them easily, and automatically load them into the render scenes.
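The in-house caching tools themselves are 3ds Max specific, but the one-click publish/version/load idea can be sketched in plain Python. This is a minimal, hypothetical model: the "cache" payload here is a placeholder dict rather than real point-cache data (which in production might be Alembic or a VRMesh), and the directory layout and function names are invented for illustration.

```python
import json
from pathlib import Path

def publish_cache(root, character, shot, frames):
    """Write a (mock) deformation cache for one character and bump the version.

    In production this step would export actual point-cache geometry; here
    the payload is a small dict so the versioning logic stays visible.
    """
    shot_dir = Path(root) / shot / character
    shot_dir.mkdir(parents=True, exist_ok=True)
    # Next version = number of existing versioned files + 1.
    existing = sorted(shot_dir.glob("v*.json"))
    version = len(existing) + 1
    cache_file = shot_dir / f"v{version:03d}.json"
    cache_file.write_text(json.dumps({"character": character, "frames": frames}))
    return cache_file

def latest_cache(root, character, shot):
    """Return the newest cache version for a character in a shot, or None.

    A render rig would reference whatever this returns, so render scenes
    always pick up the most recent approved animation automatically.
    """
    shot_dir = Path(root) / shot / character
    versions = sorted(shot_dir.glob("v*.json"))
    return versions[-1] if versions else None
```

Because versions are plain numbered files, "load the latest into the render scene" reduces to one lookup, which is roughly what makes a one-click tool possible.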
The most important thing for a smooth production, besides communication, is a functional pipeline.
As you could see in Mechsplorers, there were many assets to keep track of, which required a reliable but flexible way to create, publish, and use them. For example, the spaceship/base was essentially a background object in many of the shots and could be a static proxied mesh, but in a specific shot the doors might need to open. So an artist could swap between the lightweight proxied mesh and a 'live' version of the asset that animation could be loaded onto. This also works for making one-off tweaks: if a foreground plant doesn't have enough detail, or doesn't move right, it can be swapped with a live rig, dialed to taste right in the scene, and re-cached if desired. The AssetSwitch tool also helped with the sheer volume of assets and materials created, keeping them all consistently named, sorted, collected, and findable, as well as caching them out in a lightweight, renderable format.
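The real AssetSwitch is an in-house 3ds Max tool, so the following is only a hedged sketch of the proxy-versus-live decision it automates. The class and field names are assumptions made up for this example, not the tool's actual API.

```python
from dataclasses import dataclass

@dataclass
class AssetEntry:
    """One published asset with two representations on disk (paths invented)."""
    name: str
    proxy_path: str   # lightweight static cache, e.g. a .vrmesh
    live_path: str    # full rig that animation can be loaded onto

def resolve_asset(entry, needs_animation=False):
    """Default to the cheap proxy; pull in the live rig only when a shot
    actually needs to animate the asset (e.g. the base's doors opening)."""
    return entry.live_path if needs_animation else entry.proxy_path
```

The point of the pattern is that most shots pay only the proxy cost, and the expensive live rig is loaded per shot, on demand.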
Speaking of publishing cached objects: at first glance it may be easy to miss, but all of the vegetation is animated blowing in the wind, from the grass to the leaves. Most of the vegetation was created using one of the heaviest lifters on Mechsplorers, Leafblower, our in-house vegetation animation and scattering tool. Between the start of Mechsplorers and now, Leafblower gained many features and went through dozens of builds that were used in production. Without it, we would not have been able to create what we did. It started out as a way to animate leaves, but over time its animation, placement, and resource-handling capabilities got it press-ganged into placing every blade of grass, rock, and tree in the environments. Similar to MultiScatter or Forest Pack, Leafblower allowed us to populate and animate entire forests while keeping our scenes relatively light. When working at this scale, keeping things easily renderable was perhaps the most challenging goal.
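Leafblower itself is in-house, but the core scattering trick it shares with tools like MultiScatter and Forest Pack can be sketched generically: instead of duplicating heavy geometry, store one lightweight record per placement that references a single shared source mesh. Everything below (function name, record fields, scale range) is illustrative, not Leafblower's actual behavior.

```python
import random

def scatter_instances(source_name, count, area, seed=0):
    """Generate instance records (transform + reference to a shared mesh).

    The heavy mesh named by `source_name` is stored once; each placement
    is just a position, rotation, and scale, which is why scenes stay light
    even with thousands of trees. Positions fall in a square of side `area`.
    """
    rng = random.Random(seed)  # seeded so re-scattering is repeatable
    half = area / 2.0
    return [
        {
            "source": source_name,  # shared geometry, never copied
            "position": (rng.uniform(-half, half), rng.uniform(-half, half), 0.0),
            "rotation_z": rng.uniform(0.0, 360.0),
            "scale": rng.uniform(0.8, 1.2),  # slight variation hides repetition
        }
        for _ in range(count)
    ]
```

Seeding the random generator matters in production: a re-scatter after a tweak reproduces the same layout, so approved shots don't shift.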
Actually rendering these scenes required significant planning and organization. Everything had to be considered, from the obvious, like splitting out as many layers and depth passes as we could, to the less obvious, like omitting character hair shadow casting when rendering environment passes. We have another in-house tool that started as a free plugin but, after 10 years of tinkering and development, is now quite a bit more robust and capable. Using this render manager, the TD (Technical Director) could set up the look or renderer properties and deploy those presets to the artists working on shots. This keeps everything as consistent, on model, and efficient as possible.
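The render manager is in-house, so here is only a minimal sketch of the preset-deployment idea: a TD-authored settings dict that is merged over each artist's per-shot settings, with the TD's values winning for the keys the TD controls. The preset keys and values are hypothetical stand-ins, not real renderer parameters.

```python
# Hypothetical TD-authored preset; in practice this would hold the
# renderer properties the TD wants locked across every shot.
TD_PRESET = {
    "gi_engine": "brute_force",
    "motion_blur": True,
    "image_sampler": "progressive",
    "noise_threshold": 0.01,
}

def apply_preset(shot_settings, preset=TD_PRESET):
    """Merge a shot's local settings with the TD preset.

    Local keys not covered by the preset survive; any key the TD controls
    is overwritten, so every artist renders with consistent settings.
    """
    merged = dict(shot_settings)
    merged.update(preset)
    return merged
```

One central dict, deployed everywhere, is what keeps dozens of shots "on model" without each artist re-entering settings by hand.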
The two walls we would bump up against were render time and memory usage. Render time in VRay is getting better every day, and using the progressive renderer with our decent on-site render farm, most frames could come back in 10-15 minutes. That was enough to see and make changes during the day, and higher-quality frames could be rendered every night. For memory usage, everything possible was cached and instanced. VRay's architecture and our asset and scattering tools gave us forests for the memory cost of just 11 unique trees. In some instances we were scattering tens of thousands of 500k-poly animated trees; in other words, billions of polygons for a couple gigs of memory. Most passes were able to stay under our 21-gig footprint for a scene with full brute-force GI, motion blur, hair, and AO. That way we could render even on our oldest (7+ years old) render nodes. Another useful method for reducing overhead was trimming scenes down to just the essentials by aggressively removing anything outside the camera's view.
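The "billions of polygons for a couple gigs" claim follows directly from instancing arithmetic: stored geometry scales with the number of unique meshes, while the rendered polygon count scales with the number of placements. The byte costs below (32 bytes per stored polygon, 64 bytes per instance transform) are rough assumptions for illustration, not VRay's actual internals.

```python
def instancing_stats(unique_meshes, polys_per_mesh, instances):
    """Back-of-the-envelope instancing cost.

    Returns (total rendered polygons, approximate geometry memory in GiB).
    Memory is dominated by the few unique meshes; each extra instance only
    adds a transform, not a copy of the geometry.
    """
    total_polys = instances * polys_per_mesh
    mesh_bytes = unique_meshes * polys_per_mesh * 32  # assumed per-poly cost
    instance_bytes = instances * 64                   # assumed per-transform cost
    return total_polys, (mesh_bytes + instance_bytes) / 2**30
```

With 11 unique 500k-poly trees scattered 10,000 times, that works out to 5 billion rendered polygons for well under a gigabyte of stored geometry, which is the shape of the savings described above.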
Can you talk about how long you have been working on Mechsplorers and how long it takes to complete an episode?
Well, that is a slightly difficult question to answer, because the first concept art was created over 7 years ago. However, Mechsplorers is a project that happened in the gaps left (or intentionally made) between client work. Once the designs were settled upon and the scripts written, there were several years when animatics and assets were made, remade, and remade again. This iterative process helped us get to the current look and feel. Over that time, techniques, tools, and software were being learned and written, by us and by others as well. We even ended up switching how our liquid and fluid simulations were done after shots had already been completed. Once everything was said and done, the first episode took 7-ish years (with gaps and time-outs) to wrap up. Once we had things a little more figured out, it took under a year to animate the second spot, and the third episode is already moving along at a nice clip. We have now gained the experience and built the infrastructure to move much quicker.
…we could constantly be working ourselves in circles, so it is important to have the end goal in mind…
Do you have any tips for anyone taking on a large scale production like Mechsplorers?
We learned a great deal, and we can offer some tips for taking on a project of this scope. First off, finding the design and look and staying on model was a huge challenge. Over the years artists came and went, preferences and sensibilities changed, and technical ability grew significantly. Model sheets and reference shots were important for keeping the look uniform. The shots for the intro were completed first and used as a style guide for the first episode, but by the time the first episode was finished, its look had shifted a bit for the better, so we revisited the intro to get it looking as good as we could. It is a difficult line to draw, because we could constantly be working ourselves in circles, so it is important to keep the end goal in mind as well.
Another way to make work and life easier on a project like this is to re-evaluate and plan your shots after the animatic is completed. Some shots and animation can be ganged into one scene with multiple cameras and frame ranges, even ones that are not immediately obvious. Another approach is working wide to tight. Building, lighting, and laying out an environment multiple times can be a lot more work than making the widest shot first, then reusing and detailing that work for the closer shots. That thought process works for more than just building things: it also works for blocking your animation and refining from there, or for using derivative or scripted comps to fill in the big gestures once and then refining on a per-shot basis. Repetition is a frustrating and time-wasting component of a project like this, so automating as many repeated steps as possible was a huge priority, and we needed to build a pipeline that could get us there. Many of the tools, plugins, and scripts made in service of Mechsplorers have proven invaluable on our other projects, particularly the ones with detailed environments or numerous assets. Our tools almost always began as a way to automate a repeatable task, often as a one- or two-line script, so anyone with even an introductory level of scripting can save themselves many, many hours. With Deadline and our on-site render farm, we were able to iterate constantly and see an animation update published at 5:30 be rendered, comped, compressed, uploaded, and emailed by the time they finished dinner.
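That publish-to-email chain is the kind of thing a farm manager like Deadline expresses as dependent jobs: each stage is submitted up front and fires only when its upstream stage finishes. Below is a hedged, generic sketch of building such a chain; the job-record fields and stage names are invented for illustration and are not Deadline's actual submission API.

```python
def submit_chain(shot, stages):
    """Build one job record per stage, each depending on the previous one.

    Returns a list of dicts shaped like a farm manager's 'job B starts
    when job A completes' setup, so a single publish can trigger the whole
    render -> comp -> compress -> upload -> email sequence unattended.
    """
    jobs = []
    previous_id = None
    for i, stage in enumerate(stages):
        job = {
            "id": f"{shot}-{i:02d}",   # illustrative ID scheme
            "stage": stage,
            "depends_on": previous_id,  # None for the first job in the chain
        }
        jobs.append(job)
        previous_id = job["id"]
    return jobs
```

The whole pipeline is set up once; after that, an artist's 5:30 publish simply kicks off the first job and the dependencies carry it the rest of the way.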
One final tip is shaping your workflow to separate everything you do into as many independent, parallel tasks and components as possible. For example, each of our render scenes consists of little more than a couple of pieces of geometry or lights and a lot of caches. That way, changes propagate easily, and the layout artist can be lighting a shot with a first version of a character animation while the animator is still working on it. Another artist can be working on shading or making more assets while a stand-in or proxy is used for blocking. Separating things out lets everyone keep working as much as they can and leaves the exporting and importing for the tools to handle. The main takeaway: break things up into as many referenceable components as possible, so that as many people as possible can work at the same time and bring it all back together as easily and quickly as possible.
Repetition is a frustrating and time wasting component of a project like this, so automating as many repeat steps as possible was a huge priority…
What about software?
Here it comes, the dreaded question: "what software do you use?" Our base platform is 3DS Max. We also relied heavily on Chaos Group: VRay made the project possible with more than just speedy renders, thanks to its VRMesh caching, and Phoenix FD handled both the rivers' liquid simulations and the fluid simulations for smoke, dust, clouds, and waterfalls. Our character hair was done with Hair Farm, as were some of the animated grasses. We also leaned quite heavily on Thinkbox for Krakatoa, XMesh, and Deadline, our render farm management and software deployment tool. On top of that, we developed and used our suite of in-house plugins and scripts to save or load animation poses, offer marionette controls, cache out assets or animation, manage multiple versions with AssetSwitch, and render it all with our customized render manager. For compositing we used After Effects, along with a handful of plugins, scripts, and tools to manage footage and versions and to proxy-render some of the precomps, which could get pretty heavy. All in all, working on this project has grown our skills and abilities and helped us do more and better work, more easily. If this exhaustive explanation wasn't enough for you, feel free to hit me up with any questions or for more detailed explanations.
Where can we watch it?
The full episodes for Mechsplorers are not available online yet, but here is the intro sequence:
Aaron Dabelow is Technical Director at MAKE and a Nice Moves mentor.
If you have a question for him, drop him an email: firstname.lastname@example.org
Are you ready for the spotlight?
Submit a short & sweet personal bio or project summary and we'll be in touch!