Netflix NextGEN Blender review

********* WORKFLOW INTEGRATION *********

Shane Jackson was the Surfacing Supervisor on the film and he wrote:
We created a lot of shader node networks in house, and some custom hair and skin shaders too! The challenges came when we were doing massive sets and needed techniques to cover wide areas while retaining resolution, something we couldn’t do with sets of textures from Painter alone.
So long story short, shaders were done totally in Blender, created with methods that I'm sure you guys have seen, and with as much procedural stuff as we could get away with.
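The exact node networks aren't shown in the post, but one common way to cover a wide area while retaining resolution is to break up a tiled Painter texture with a large-scale procedural mask. Here is a minimal sketch of that idea in Blender's Python API – the material name, image path and values are made up for illustration, not taken from the production shaders:

```python
import bpy

# Minimal sketch: break up a tiled texture with large-scale noise so it can
# cover a wide set without visible repetition. Names and paths are placeholders.
mat = bpy.data.materials.new("wide_area_sketch")
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.clear()

out = nt.nodes.new("ShaderNodeOutputMaterial")
bsdf = nt.nodes.new("ShaderNodeBsdfPrincipled")

tex = nt.nodes.new("ShaderNodeTexImage")      # base tiled texture from Painter
tex.image = bpy.data.images.load("//textures/ground_base.png")

noise = nt.nodes.new("ShaderNodeTexNoise")    # large-scale breakup mask
noise.inputs["Scale"].default_value = 0.5

mix = nt.nodes.new("ShaderNodeMixRGB")        # darken patches where noise is high
mix.blend_type = "MULTIPLY"
mix.inputs["Color2"].default_value = (0.6, 0.6, 0.6, 1.0)

nt.links.new(noise.outputs["Fac"], mix.inputs["Fac"])
nt.links.new(tex.outputs["Color"], mix.inputs["Color1"])
nt.links.new(mix.outputs["Color"], bsdf.inputs["Base Color"])
nt.links.new(bsdf.outputs["BSDF"], out.inputs["Surface"])
```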

********* RENDER TIME *********

The average per frame was 3.76 hours at DCI 2K resolution for NextGen – some sequences rendered much more quickly, some took a lot longer (think: water effects via Alembic caches, 20-30 VDBs in a single shot for smoke, fire and explosions, etc.). There are obviously many redos and artistic retakes involved, but if you eliminate those, here's a rough breakdown:

93-minute movie = 133,920 frames @ 24 FPS
They rendered 5 different versions of the movie: English mono, plus English and Mandarin stereo (for the Chinese market), so this is 669,600 frames
Each frame took 3.76 hours on average
A single machine would take 2,517,696 hours, or 104,904 days, to render the movie. We had around 2,500 nodes working on it, which works out to roughly 42 days
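For what it's worth, the arithmetic checks out; here is a quick sanity check in Python using the numbers from the post:

```python
# Back-of-the-envelope check of the render-time figures above.
minutes = 93
fps = 24
frames_per_version = minutes * 60 * fps        # 133,920 frames
total_frames = frames_per_version * 5          # 669,600 frames across 5 versions
hours_per_frame = 3.76

single_machine_hours = total_frames * hours_per_frame  # ~2,517,696 hours
single_machine_days = single_machine_hours / 24        # ~104,904 days
farm_days = single_machine_days / 2500                 # ~42 days on 2,500 nodes

print(frames_per_version, total_frames)
print(round(single_machine_days), round(farm_days, 1))
```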

In reality, the Mandarin lipsync was only done for the shots whose facial animation warranted it – roughly 350 shots. Those renders used an animated render region targeting just the mouth, so they tended to take a quarter to half the time of the full renders. These regions were then composited on top of the English renders to replace the lipsync.
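Blender's render border does exactly this kind of partial render. A minimal sketch of the idea – the coordinates here are made up, and in production the region would be driven per shot to follow the mouth:

```python
import bpy

# Sketch: confine rendering to a small border around the mouth so only the
# lipsync area is re-rendered, then composite that patch over the English render.
scene = bpy.context.scene
rd = scene.render

rd.use_border = True            # render only the region defined below
rd.use_crop_to_border = False   # keep the full frame size so it overlays cleanly
rd.border_min_x, rd.border_max_x = 0.40, 0.60   # placeholder, normalized 0-1
rd.border_min_y, rd.border_max_y = 0.25, 0.45   # placeholder, normalized 0-1
```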

Shane Jackson also shared some details on the compositing side:

When we needed to modify a specific pass, we extracted the pass from the beauty pass (which was the main pass used in our composites), modified it, then added it back to the beauty pass. Cryptomattes were crucial for this.
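The film's actual comp setups aren't published, but the "extract a pass, tweak it, add it back" idea can be sketched in Blender's compositor. The choice of the glossy pass and the gain value below are assumptions for illustration; in practice a Cryptomatte node would also supply a matte to limit the tweak to specific objects, which is omitted here for brevity:

```python
import bpy

# Sketch: subtract one pass out of the beauty, grade it, and add it back.
scene = bpy.context.scene
bpy.context.view_layer.use_pass_glossy_direct = True  # expose GlossDir on the Render Layers node

scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links
nodes.clear()

rl = nodes.new("CompositorNodeRLayers")       # beauty + per-pass outputs

# Subtract the glossy pass out of the beauty...
sub = nodes.new("CompositorNodeMixRGB")
sub.blend_type = "SUBTRACT"
sub.inputs[0].default_value = 1.0
links.new(rl.outputs["Image"], sub.inputs[1])
links.new(rl.outputs["GlossDir"], sub.inputs[2])

# ...grade the isolated pass (here just a simple gain)...
gain = nodes.new("CompositorNodeMixRGB")
gain.blend_type = "MULTIPLY"
gain.inputs[0].default_value = 1.0
gain.inputs[2].default_value = (1.2, 1.2, 1.2, 1.0)
links.new(rl.outputs["GlossDir"], gain.inputs[1])

# ...and add it back on top of the rest of the beauty.
add = nodes.new("CompositorNodeMixRGB")
add.blend_type = "ADD"
add.inputs[0].default_value = 1.0
links.new(sub.outputs["Image"], add.inputs[1])
links.new(gain.outputs["Image"], add.inputs[2])

comp = nodes.new("CompositorNodeComposite")
links.new(add.outputs["Image"], comp.inputs["Image"])
```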

The compositor is slow, but it was fine for our purposes. We have plans to add caching to it, and add a mask channel to nodes that are missing it – there are workarounds, but it’d be nice to be able to use all nodes in the same fashion.

********* CROWD SIMULATION *********

Many shots used an amazing add-on by Sam Wald that brought in rigged characters and instanced them intelligently, giving each instance its own armature so that action blocks could be randomized and offset per character.
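The add-on's internals aren't shown in the post, but the core trick of offsetting actions per instance can be sketched with Blender's Python API. This is an assumed approach, not the add-on's actual code, and the object and action names are placeholders:

```python
import bpy
import random

# Sketch: duplicate a rigged character and give each copy a randomly offset
# NLA strip of the same walk cycle, so the crowd doesn't move in lockstep.
src = bpy.data.objects["crowd_rig"]       # placeholder rig object
walk = bpy.data.actions["walk_cycle"]     # placeholder action

for i in range(20):
    dup = src.copy()                      # shallow copy of the rig object
    bpy.context.collection.objects.link(dup)
    dup.location.x = i * 2.0              # crude placement; a real tool scatters these

    dup.animation_data_create()
    track = dup.animation_data.nla_tracks.new()
    start = random.randint(1, 48)         # random time offset per character
    track.strips.new("walk", start, walk)
```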

The crowd running near the end was largely a Golaem crowd in Maya, exported through Alembic. A custom script was used to reassign materials to the Alembic objects when they were imported back in.
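That script isn't published, but a material-reassignment pass of this kind might look roughly like the following; the name prefixes and material names are invented for illustration:

```python
import bpy

# Sketch: after importing the Alembic cache, match objects to materials
# by a naming convention and reassign them.
MATERIAL_BY_PREFIX = {
    "robot_": "robot_metal",   # placeholder mapping
    "crowd_": "crowd_skin",
}

for obj in bpy.context.scene.objects:
    if obj.type != "MESH":
        continue
    for prefix, mat_name in MATERIAL_BY_PREFIX.items():
        if obj.name.startswith(prefix):
            mat = bpy.data.materials.get(mat_name)
            if mat:
                obj.data.materials.clear()
                obj.data.materials.append(mat)
```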

The robot crowd near the very end was more of a custom setup. Nothing gave them the control they needed, so they faked it using nParticles in Maya. That gave them collision avoidance for free and let them use fields to art-direct the movement.

Long story short, if you can use hair instancing for massive crowd shots, do it. Cycles instancing is fantastic when employed correctly.
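The article doesn't include their instancing setup, but the basic hair-instancing recipe is simple enough to sketch – scatter a collection of characters over a ground mesh with a hair particle system, which Cycles renders as true instances. Object and collection names below are placeholders:

```python
import bpy

# Sketch: scatter instances of a character collection across a ground plane.
ground = bpy.data.objects["ground"]                     # placeholder emitter mesh
crowd = bpy.data.collections["crowd_characters"]        # placeholder collection

mod = ground.modifiers.new("crowd_scatter", type="PARTICLE_SYSTEM")
ps = mod.particle_system.settings
ps.type = "HAIR"                  # hair particles, so instances just sit on the surface
ps.count = 5000                   # number of crowd members
ps.render_type = "COLLECTION"     # render each particle as an instance from the collection
ps.instance_collection = crowd
ps.particle_size = 1.0            # scale of each instanced character
```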

Are you ready to start using Blender?

I hope you enjoyed this review.
Don't forget to subscribe, hit that bell button for instant notifications, and leave a comment.

What did you like most about NextGen?
Thanks.

