
Realtime & GPU acceleration in the CG race

Whenever you get a new graphics card for your computer, a drivers disc is provided. On that disc there are “demo” installers, stand-alone executables that show off the incredible improvements of the current graphics architecture. Nvidia and AMD are the leading companies in this field. In 2014 Nvidia showcased at their expo 3- and 4-way SLI cards acting as a remote server for four PC workstations, letting users work remotely with powerful realtime visualization served from the back end. AMD has been steadily growing in the portable and gaming laptop market with its new “Zen” architecture. These two companies share a common interest: the race for faster realtime rendered graphics.

Why “real” time?

Whenever a 3D scene is displayed on screen, it takes a lot of CPU power to calculate light bounces, material surfaces, the polygon count of the objects, liquid simulations, and so on. Graphics cards are designed to take that computational load off the CPU and move it onto the GPU for faster renders, or in many cases no render at all: the scene plays back from the computer to the screen with no delay, just like a regular TV show. That is what “realtime” means. With such an advantage, is there still a need to render graphics offline, or is it better to display them directly?

Rendered vs Realtime

The traditional approach to creating computer graphics has always adopted the “artistically directed” view. It requires that many aspects of the CG image be rendered beforehand (as a series of image sequences) in separate passes (AOVs), to be “reconstructed” later on compositing workstations. While this works for small productions, you can quickly imagine the mayhem it causes on very large ones (naming conventions, object versioning, cache simulation tests and approvals, dailies, cloth layering simulations) when all those passes must add up to the final scene.

Say you create a gold ring. That ring could have visible scratches (pass 1), a solid color (pass 2), shadows (pass 3), specular highlights (pass 4)… I think you get the idea. All of this is prepared beforehand for the daily meetings and feedback, so the art director can review and approve the progress on that one ring. Once it’s approved, the CG team (the shading department) tweaks and finishes the shading for the ring, gets it approved at the next meeting, and you’re done. Then it’s on to the next rendered element, “the hand”, repeating the same process… (yes, this is real, no joke).

This explains why there can be months and months of work on materials and surface shading before anything is handed to the art department for review and approval, and even then, looks can still be tweaked up until the last second of production.

So the basic workflow is:

  • Render several passes to compose an object
  • Render backgrounds and their passes
  • Get all the image sequences (generally layered .exr images)
  • Recombine all the passes in the main compositing software (see the sketch below)
  • Get approval
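
To make that recombination step concrete, here is a minimal sketch in Python with numpy of how light AOVs are typically summed back into a “beauty” image inside a compositing package. The pass names and the simple additive model are illustrative assumptions, not any particular renderer’s exact output:

```python
import numpy as np

# Hypothetical AOV buffers for one frame, as float32 RGB arrays.
# In production these would be layers read from a layered .exr file.
h, w = 540, 960
diffuse  = np.random.rand(h, w, 3).astype(np.float32)   # base color pass
specular = np.random.rand(h, w, 3).astype(np.float32)   # highlights pass
sss      = np.zeros((h, w, 3), dtype=np.float32)        # none for a metal ring
emission = np.zeros((h, w, 3), dtype=np.float32)

# Light AOVs are additive: summing them reconstructs the beauty render.
beauty = diffuse + specular + sss + emission

# A compositor can now grade each pass independently, e.g. boost the
# specular by 20% without re-rendering anything in 3D:
graded = diffuse + 1.2 * specular + sss + emission
```

That last line is the whole point of the pass system: a look change becomes cheap 2D math in the compositing package instead of a brand-new 3D render.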

This mantra of “I’d like to change anything, at any time, at any point of the production” is why every possible asset and pass gets generated, and it can be a pretty daunting experience for anyone who isn’t used to the CG industry.

In contrast, realtime representation of 3D scenes displays everything directly through the GPU, so the graphical aesthetics of the scene rely on the computational power of the graphics card. This saves time for both the director and the shading department: everyone in the same meeting knows exactly what the final image will look like.

Why is realtime such a huge advantage?
  • Universal scatter and reflection shaders are used (BRDFs)
  • Materials are embellished through “post” camera effects
  • Looks can be decided instantly using predefined LUTs (see the sketch after this list)
  • Simulations, cloth, and hair can be isolated for faster rendering
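
As a quick illustration of the LUT point above, here is a minimal numpy sketch of applying a per-channel 1D lookup table to a frame. The toy contrast curve is an invented stand-in, not a production grading LUT:

```python
import numpy as np

def apply_lut_1d(image, lut):
    """Apply a per-channel 1D LUT (shape (n, 3)) to a float image in [0, 1]."""
    xs = np.linspace(0.0, 1.0, lut.shape[0])
    out = np.empty_like(image)
    for c in range(3):  # remap each channel through its own curve
        out[..., c] = np.interp(image[..., c], xs, lut[:, c])
    return out

# Toy contrast-boosting curve (smoothstep: darker shadows, brighter highlights).
xs = np.linspace(0.0, 1.0, 64)
curve = xs ** 2 * (3.0 - 2.0 * xs)
lut = np.stack([curve, curve, curve], axis=1)

frame = np.random.rand(270, 480, 3).astype(np.float32)
graded = apply_lut_1d(frame, lut)
```

Because the LUT is just a remapping table, swapping one curve for another changes the whole look of the frame instantly, with no re-render.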

As a necessary step toward realistic results, realtime BRDF shaders demand more maps than traditional materials: a normal map, a vertex map, a cavity map, specularity, roughness, and so on. Each represents an aspect of the material and of how light scatters across the geometry.
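
To give a feel for what those maps feed into, here is a minimal single-point sketch of a Cook-Torrance-style GGX BRDF in Python. The sampled values (normal, roughness, base color) are made-up stand-ins for texels you would read from the maps above, and the geometry-term remapping follows the common Schlick-GGX approximation:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ggx_shade(n, v, l, base_color, roughness, f0=0.04):
    """Lambert diffuse + GGX specular for one shading point."""
    h = normalize(v + l)                 # half vector between view and light
    nl = max(np.dot(n, l), 0.0)
    nv = max(np.dot(n, v), 1e-4)
    nh = max(np.dot(n, h), 0.0)
    hv = max(np.dot(h, v), 0.0)

    a = roughness ** 2
    # GGX normal distribution: how tightly microfacets align with h.
    d = a**2 / (np.pi * (nh**2 * (a**2 - 1.0) + 1.0) ** 2)
    # Schlick-GGX geometry term (k remapped for direct lighting).
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))
    # Schlick Fresnel approximation.
    f = f0 + (1.0 - f0) * (1.0 - hv) ** 5

    specular = d * g * f / (4.0 * nl * nv + 1e-7)
    diffuse = base_color / np.pi
    return (diffuse * (1.0 - f) + specular) * nl

# Made-up texel values, as if sampled from the normal/roughness/albedo maps:
n = normalize(np.array([0.1, 0.2, 1.0]))   # from the normal map
v = normalize(np.array([0.0, 0.0, 1.0]))   # view direction
l = normalize(np.array([0.5, 0.5, 1.0]))   # light direction
print(ggx_shade(n, v, l, base_color=np.array([1.0, 0.85, 0.4]), roughness=0.3))
```

A GPU runs this same math per pixel, per light, every frame, which is exactly why all those maps need to be authored up front.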

With all of that as our introduction and foundation, let’s get into the substance of the article.

The race for faster renders based on CPU power and (partially) on GPU power for the average user started with V-Ray and Maxwell Render around 2009.

Before 2009 it was possible to “emulate” physical properties with a normal map, an AO map, and some direct shader code (.fx) in the Autodesk, Discreet, and Avid packages, but these were very limited by what the graphics card of the day could do (DirectX 8 and below).

Back in 2010, two big surprises dropped into the CG arena: KeyShot and MachStudio Pro, which delivered realistic rendering in a few seconds, outmatching their CPU-bound competitors. The possibilities these two packages brought into play were amazing: global illumination, soft shadows, light scatter, and subsurface scattering materials right in the viewport. Both offered plugins for Autodesk, Foundry, and Google SketchUp software and most of the rest of the line (Rhino, LightWave, etc.), and both also ran as stand-alone applications.

The Bakery and Isotropix Clarisse iFX entered the realtime and GPU acceleration market back in 2011. By then DirectX was around version 10, and ever more impressive demos (hair, particles for water surfaces, subsurface scattering, etc.) were running in vivid realtime (most cards benchmarked around 72 fps). I got a demo license for The Bakery that I essentially couldn’t run on the low-end graphics card I owned, and I forgot all about GPU-accelerated graphics for a good four years.

Today’s competitive CG market drives clients to expect instant results for ideas and concepts, almost like a finished product. Realtime CG graphics is the answer!

In 2014 I was testing Arnold, Modo, V-Ray, and Mental Ray for render speed (you can watch the video here), and that’s when I got to know Redshift. A 3D World magazine article and some online advertising covered this renderer, so I downloaded the watermarked demo for Softimage, and the speed results were outstanding (on a Quadro K4000). This software harnesses the GPU’s full power, which literally (really, literally) cuts your render times in half, or to a quarter of what competitors like Arnold or V-Ray would normally take. But I still relied on the “passes” system described earlier, and more speed was needed.

By 2015, Unity and Unreal Engine had appeared on the map as free alternatives for GPU rendering, and the glorious realtime graphics they displayed caught my attention in an interactive archviz demo: you could touch objects in the apartment and they would react naturally to lighting conditions and physical properties. If you closed the windows, the light would scatter across the floor without the overexposure you would expect from a 3D render with light bounces. So I decided to try Unreal Engine first, and I fought with materials in Blueprint so much that I went looking for a solution.

That’s how I met Substance Painter and Substance Designer. At the time I was using Modo, and I was blown away by the interactive possibilities of a shader generated in Substance Designer. Better yet, everything Substance Designer generated could be read correctly by any software with the (free) plugin to accept those materials. The scope of Allegorithmic’s Substance Designer is very broad, and I won’t be covering it in this post. The only thing I’ll say is: it is now an industry standard.

With 2015 in play, V-Ray had been developing (since version 2) an RT production viewport for 3ds Max, and more demos were available for Maya and Modo. But everyone got a surprise when Redshift was nominated for and won the CG award for “Best Rendering Software of the Year 2015”, defeating everything else in the CPU and GPU render market.

As 2016 draws to a close, Unity and Unreal Engine have shifted their efforts toward accepting Alembic files, so that studios will choose them as a free option for direct realtime rendering of 3D scenes and animation.

Unity and Unreal are leading the videogame and realtime solutions for many studios.

Realistic RT production with the right look is hard to achieve, but the hardware offers help.

Graphics cards that specialize in GPU compute (Nvidia’s, in this case) have CUDA cores: essentially many small graphics processors that can ingest huge numbers of polygons at once and display the surface properties of a material realistically, using a variety of pre-generated maps for the lighting math.
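
That “many small processors” idea is data parallelism. Here is a minimal numpy sketch of the shape of the work those cores do: one transform applied to a million vertices at once. On the GPU this would be a single kernel launch spread across thousands of CUDA cores; numpy merely illustrates the pattern on the CPU:

```python
import numpy as np

# One million vertices (x, y, z), the kind of load a GPU chews through per frame.
vertices = np.random.rand(1_000_000, 3).astype(np.float32)

# A single model rotation matrix applied to every vertex.
theta = np.radians(30.0)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]], dtype=np.float32)

# One data-parallel operation over the whole buffer; on the GPU each
# core would process its own slice of these rows simultaneously.
transformed = vertices @ rotation.T
```

Because every vertex is independent, the work splits perfectly across cores, which is why polygon counts scale so well with GPU hardware.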

The RT race started with Unreal and archviz back in 2014, but it has now completely overtaken the production pipeline for studios, accelerating both quality and speed in the competitive CG arena across the software and renderers mentioned above. Maxwell and Clarisse can team up with Allegorithmic’s Substance Painter to deliver fantastic results. Katana is one huge competitor for look development and realtime previz / final rendering, widely known among Foundry users.

Who will dominate the race for the 2017 CG realtime workflow? That remains to be seen, as better graphics cards are due in the January–March cycle, boosting performance in every way for free and paid GPU rendering solutions alike.

Thank you for reading this article. This review got a little broad, with many more things left to be said, but I hope you’ve liked it. If so, please share it on your networks.
