Forward vs. Deferred Rendering: why/when forward rendering still matters
If you are familiar with the rendering pipeline, you have probably heard the terms forward and deferred rendering. They refer to different ways of shading (calculating the final surface color of) the objects in your three-dimensional scene.
Forward rendering was the main (and probably the only) shading algorithm used in games, even after the introduction of the programmable rendering pipeline. Deferred rendering only made its way into games in late 2008, when S.T.A.L.K.E.R. for PC became the first AAA game to rely on it. Nowadays, there are many articles about deferred rendering and how it is implemented in different games, and with all that fuss, forward rendering starts to look like a last-gen technique.
In this article I will give a brief overview of forward and deferred rendering and show some reasons why forward rendering still matters.
Forward Rendering Overview
Imagine that you have a scene with hundreds of objects and some light sources. How would you shade the objects in that scene?
Using a forward renderer you would pick each object in the scene separately, in any order, and then calculate its surface color based on its material and all the lights that affect it. This gives you a shading complexity of O(geometries_surface_pixels * lights).
As you can see, the shading performance depends on the number of objects you have and the area covered by their surfaces. This happens because there is usually more than one object rendered to a given portion of the screen, so you might shade a screen pixel multiple times (once for each object that covers it). Performance is therefore lost on unnecessary shading of screen pixels that are later replaced or discarded.
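As a rough illustration (real shading runs per-fragment on the GPU, not on the CPU), here is a hypothetical Python sketch of the forward loop above; the object and light structures are invented simplifications, but the loop shape shows both the O(geometries_surface_pixels * lights) cost and the overdraw problem:

```python
# Minimal CPU-side sketch of a forward shading loop (illustrative only).
# Objects and lights are hypothetical stand-ins, not a real engine API.

def forward_shade(objects, lights, framebuffer, depth_buffer):
    shade_count = 0
    for obj in objects:                      # one draw per object
        for px, depth in obj["pixels"]:      # pixels the object covers
            color = 0.0
            for light in lights:             # every light, for every covered pixel
                color += light["intensity"]  # stand-in for a real BRDF
            shade_count += 1                 # shading work performed
            if depth < depth_buffer.get(px, float("inf")):
                depth_buffer[px] = depth
                framebuffer[px] = color      # may still be overwritten later
    return shade_count

objects = [
    {"pixels": [((0, 0), 2.0), ((0, 1), 2.0)]},
    {"pixels": [((0, 0), 1.0)]},             # overlaps the first object
]
lights = [{"intensity": 0.5}, {"intensity": 0.25}]
fb, db = {}, {}
work = forward_shade(objects, lights, fb, db)
print(work)  # 3 pixel shadings for only 2 visible pixels: overdraw wastes work
```

Note how the second object forces pixel (0, 0) to be shaded twice even though only one of the two results survives in the framebuffer.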
Another issue a forward renderer needs to solve is how to handle all the combinations of geometry types (static, morph, skinned, etc.), object materials, and light source types (spot, point, directional, etc.). For example, each object might be lit by an arbitrary number of arbitrary light sources; how would you handle the rendering of that object?
There are basically two ways to handle this: single-pass or multi-pass rendering. Using multi-pass rendering, you would create a small shader for each geometry-material-light combination and render each object multiple times using additive blending. Using single-pass rendering, you would create a large (uber) shader that handles any geometry-material-light combination. You might do that using dynamic branching inside the shader, or using pre-processor defines to compile different versions of the shader.
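To make the single-pass approach concrete, here is a hypothetical sketch of how an engine might enumerate the pre-processor defines for every shader variant; the feature axes and define names are invented for illustration, not taken from any real engine:

```python
from itertools import product

# Hypothetical feature axes for shader permutations; names are illustrative.
GEOMETRY = ["static", "morph", "skinned"]
MATERIAL = ["opaque", "alpha_test"]
LIGHT = ["directional", "point", "spot"]

def shader_defines(geometry, material, light):
    """Build the pre-processor defines used to compile one shader variant."""
    return [f"GEOMETRY_{geometry.upper()}",
            f"MATERIAL_{material.upper()}",
            f"LIGHT_{light.upper()}"]

variants = [shader_defines(g, m, l)
            for g, m, l in product(GEOMETRY, MATERIAL, LIGHT)]
print(len(variants))  # 3 * 2 * 3 = 18 compiled combinations
```

Even this toy example shows why the approach is sometimes called a combinatorial explosion: every new axis multiplies the number of shader variants you have to compile and manage.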
Now that you have a grasp of how a forward renderer works, let's have a look at a deferred renderer.
Deferred Rendering Overview
Imagine our example scene one more time, with hundreds of objects and some light sources. Again, how would you shade the objects in that scene?
Using a deferred renderer you would first pick each opaque object in the scene separately, in any order, extract any geometry data that you would like to use while shading, and store it in multiple screen-space buffers using MRT (Multiple Render Targets). Note that we are not using the word “render” for this first pass because we are not calculating the final pixel color.
Finally, you would traverse each screen-space pixel and shade it, using the geometry data stored in the screen-space geometry buffers and the scene's light sources. This gives you a shading complexity of O(screen_pixels * lights).
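The two passes above can be sketched the same way as the forward example; this is again a hypothetical CPU-side simplification (the attribute value stands in for a whole G-buffer entry of normals, albedo, and so on):

```python
# Illustrative two-pass deferred sketch; data structures are invented stand-ins.

def gbuffer_pass(objects):
    """'Render' the geometry once, keeping only the nearest surface per pixel."""
    gbuffer = {}  # pixel -> (depth, surface attributes stand-in)
    for obj in objects:
        for px, depth, attrs in obj["pixels"]:
            if px not in gbuffer or depth < gbuffer[px][0]:
                gbuffer[px] = (depth, attrs)
    return gbuffer

def lighting_pass(gbuffer, lights):
    """Shade each visible screen pixel exactly once."""
    shade_count = 0
    framebuffer = {}
    for px, (_, attrs) in gbuffer.items():
        color = 0.0
        for light in lights:
            color += attrs * light["intensity"]  # stand-in for a real BRDF
        framebuffer[px] = color
        shade_count += 1
    return framebuffer, shade_count

objects = [
    {"pixels": [((0, 0), 2.0, 1.0), ((0, 1), 2.0, 1.0)]},
    {"pixels": [((0, 0), 1.0, 0.5)]},        # overlapping geometry
]
lights = [{"intensity": 0.5}, {"intensity": 0.25}]
fb, work = lighting_pass(gbuffer_pass(objects), lights)
print(work)  # 2 shadings: one per visible pixel, regardless of overdraw
```

Compare this with the forward sketch: the same overlapping scene now costs two shading operations instead of three, because the expensive lighting loop never runs for occluded surfaces.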
As you can see, in a deferred renderer your shading performance is independent of the number of objects in the scene and of their surface size, and is usually easier to predict. However, note that your rendering performance is still affected by your geometry, because you do have to “render” it to generate the geometry buffers. That said, shading is usually the most expensive part of the rendering pipeline, so it is the one you would like to minimize.
In a deferred renderer your input geometry data is fixed (encoded in the geometry buffers), and you usually use the same shading algorithm for every screen pixel. However, you might still have an arbitrary number of arbitrary light sources affecting different parts of your screen that you would have to deal with.
Lastly, note that a conventional deferred renderer cannot handle transparent geometry, because the geometry buffers only store data from the nearest surface.
Forward or Deferred? Which one should I choose?
So let's try to come up with a few reasons why we might prefer forward rendering over deferred rendering on consoles:
- Old hardware/consoles/GPUs don’t support MRT. (unsolvable)
- We would still need a forward renderer to handle transparent objects. (unsolvable)
- Deferred rendering requires high bandwidth, which can be a problem even on current-gen consoles with a 128-bit GPU bus. (solvable)
- There’s no support for MSAA. (solvable)
If you take the first and second assertions as facts, you conclude that even with a deferred renderer you still need to maintain a forward renderer for old hardware and transparent objects. That basically means maintaining two separate pipelines and making them look the same.
Another point of discussion is that, even though you can handle many dynamic lights with deferred rendering, you are still very limited in the number of lights that can cast shadows. Therefore, you might be able to place many lights in your indoor environment but only one or two of them would cast shadows.
Lastly, I really like looking at research demos where people show scenes lit by thousands of dynamic lights at the same time, but do our game scenes require that many lights? Would that make the experience any better for the player?
Let's finish this article by talking about two games: COD: MW2, which uses a forward renderer, and COD: Black Ops, which according to its developers uses a deferred renderer (I'm not comparing Infinity Ward with Treyarch here). According to many reviewers, the former presented more photo-realistic graphics than the latter. And if you played Black Ops, you probably noticed that the way dynamic lights lit geometry is very limited, so what's the point of using deferred rendering?
Considering all those points, I would say that, depending on the game, a forward renderer might still be the best way to go in the current generation.