Framebuffers

Introduction

Here we describe how the layering shaders write to user framebuffer passes, a mechanism different from the one used in the past with the mia_material_x shader. This may help a given integration encourage a simpler pipeline methodology. On this page, we explain why the new approach is better suited to production use, and point out the drawbacks of the older methods.

Using layering shaders with framebuffers

For minimal setup, simply create specially-named framebuffers for the camera, using the standard names. Here is an example, in scene description language, of how to specify the framebuffers:

framebuffer "indirect_diffuse"     
    datatype "rgba_16" 
    filtering true 
    filename "passes.exr"
    compression "rle" 
    premultiplied true 
    user true 
    useopacity true
    attribute string "LPE" "L.+<RD>E"
framebuffer "direct_diffuse"      
    datatype "rgba_16" 
    filtering true 
    filename "passes.exr"
    compression "rle" 
    premultiplied true 
    user true 
    useopacity true
    attribute string "LPE" "L<RD>E"
    ...
    ... etc., repeated for all the buffers
    ...
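Since the declarations differ only in their name and LPE string, they can be generated by a small script. Below is a minimal Python sketch that emits declarations in the form shown above for several of the standard names described in the next section. Note that the LPE strings marked "assumed" are inferred by analogy with the direct/indirect pattern given on this page; they are illustrative, not canonical values.

# A minimal sketch (Python) that generates the repeated framebuffer
# declarations shown above. The LPE strings marked "assumed" follow the
# direct ("L<..>E") vs. indirect ("L.+<..>E") pattern by analogy and are
# not canonical values from this page.

BUFFERS = {
    "direct_diffuse":    "L<RD>E",    # given on this page
    "indirect_diffuse":  "L.+<RD>E",  # given on this page
    "direct_glossy":     "L<RG>E",    # assumed
    "indirect_glossy":   "L.+<RG>E",  # assumed
    "direct_specular":   "L<RS>E",    # assumed
    "indirect_specular": "L.+<RS>E",  # assumed
}

def declaration(name, lpe, filename="passes.exr"):
    # Mirror the scene-description snippet above, one buffer at a time.
    return (f'framebuffer "{name}"\n'
            f'    datatype "rgba_16"\n'
            f'    filtering true\n'
            f'    filename "{filename}"\n'
            f'    compression "rle"\n'
            f'    premultiplied true\n'
            f'    user true\n'
            f'    useopacity true\n'
            f'    attribute string "LPE" "{lpe}"')

print("\n".join(declaration(n, e) for n, e in BUFFERS.items()))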

These are the framebuffer names and their meaning:

direct_diffuse
The diffuse direct lighting. The light travels in a path from the light (L) to the diffuse reflective (<RD>) surface to the eye (E).
indirect_diffuse
The (diffuse) indirect lighting of the scene. The light may hit any number of surfaces, reflecting or transmitting, before the final diffuse surface leading to the eye.
direct_glossy
The glossy direct lighting. The light travels in a path from the light to the glossy reflective surface to the eye.
indirect_glossy
The glossy reflections not coming directly from lights. In general, the traditional highlight and reflection passes are not exactly the direct and indirect reflections, but rather the direct light loop and ray traced reflection rays. This means that visible area lights could end up in the traditional reflection pass rather than the highlight. Using the Light Path Expression model, we more strictly adhere to direct vs. indirect light path specification.
direct_specular
The specular direct lighting (from light to specular surface to eye). These are direct light reflections from mirror-like surfaces.
indirect_specular
The specular reflections not coming directly from lights, but rather from objects.
diffuse_transmission
Translucency effects. Both direct and indirect.
glossy_transmission
The refractive blurred transmission. Both direct and indirect.
specular_transmission
The refractive un-blurred (perfect-mirror) transmission. Both direct and indirect.
front_scatter
The "front" facing subsurface effect.
back_scatter
The "back" facing subsurface effect.
emission
Any "additional-light"/self-illumination/incandescence effects.
Shaders other than the layering shaders can be made to write to these framebuffers with the help of the mila_adapter shader.

An example

In a sample scene with a few layers, we place a simple transparent sphere in front of an opaque robot. In this case, the transparency conceptually represents a thin glass sphere, as opposed to a solid glass sphere, which would be better implemented with a specular transmission component.

Beauty Render

Image:framebuffer-final-render.jpg
Here is an example scene containing various materials. The blue sphere near the
front is using "transparency" (as opposed to specular transmission) to show part of the scene behind itself.

Framebuffer passes using a simple traditional mechanism

In general, the shader on the topmost object outputs values to the framebuffers. With opaque objects, this is visually clear across diffuse and reflection passes. However, with a transparent object, the renderer will execute shaders on background objects as well. So, the key production question is: to which framebuffer pass or passes will the background object contribute?

In a simple implementation, the topmost shader outputs the values to a separate "transparency" pass containing the full result, i.e., a beauty pass, of what is behind the topmost object. It may look like this:

Image:framebuffer-current-behavior.jpg

Here we see the various outputs of mia_material_x. We have the surfaces split into indirect and direct diffuse, specular (direct specular or highlight), and reflection (indirect specular), so that the appearance of these surfaces can be tuned in post production. But the blue sphere appears completely opaque in each of these.

Objects that have transparency, like this blue sphere, simply include the objects behind them in the transparency output of the shader. This transparency output is no longer split into diffuse, specular, etc. parts, but comes in wholesale, as a final composited result, like a beauty pass for the background objects.

Advantages

Disadvantages

Overall, we lack control across objects: what is seen through transparency will not "follow along" when reweighting is applied to the other additive passes in the compositing stage. The transparency pass is altogether independent, with no control over its separate light components.
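To make this concrete, here is a toy compositing sketch in Python; single floats stand in for whole image passes, and the names are purely illustrative. Doubling the specular gain brightens the directly seen reflections, but any specular light baked into the pre-composited transparency pass is untouched:

# Traditional mechanism: "transparency" is a pre-composited beauty of the
# background objects, not split into light components.
passes = {
    "direct_diffuse":    0.30,
    "indirect_diffuse":  0.10,
    "direct_specular":   0.15,
    "indirect_specular": 0.05,
    "transparency":      0.25,  # background beauty seen through the sphere
}

def composite(p, specular_gain=1.0):
    # Reweight the specular passes, then add everything together.
    return (p["direct_diffuse"] + p["indirect_diffuse"]
            + specular_gain * (p["direct_specular"] + p["indirect_specular"])
            + p["transparency"])

print(composite(passes))                     # 0.85
print(composite(passes, specular_gain=2.0))  # 1.05 -- transparency unaffected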

Framebuffer shaders that can address this issue have been developed in the past, for those willing to build more complex shading networks. In fact, we have developed and tested several versions, though their usage proved difficult to support. We believe we can avoid some of those drawbacks, in both ease of use and performance, by incorporating framebuffer support into the layering functionality of this library.

Framebuffer passes using the layering shaders

In the layering library, we offer output writing based on knowing which elemental component is being executed. In terms of Light Path Expressions, the elemental shaders identify the component interaction stage before the eye, and often whether the incoming light for that interaction is direct or indirect. For example, "L<RD>E" can be identified as direct light arriving inside mila_diffuse_reflection; indirect light would be represented by "L.+<RD>E".
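As a sketch of this naming pattern only (the helper below is illustrative, not part of the library):

# Direct light at an elemental interaction: "L<..>E".
# Indirect light inserts ".+" for the extra bounces before that interaction.
def lpe(event: str, direct: bool) -> str:
    return f"L<{event}>E" if direct else f"L.+<{event}>E"

assert lpe("RD", direct=True)  == "L<RD>E"    # direct_diffuse
assert lpe("RD", direct=False) == "L.+<RD>E"  # indirect_diffuse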

We could have a transparency output such as "L.*<TS>E", though that would also receive contributions from specular transmission. It is also possible to separate out the diffuse, glossy, and specular interactions seen from the eye through transparency. A compositor can then treat the various sub-outputs as ready to add into the individual outputs seen directly by the eye (not through transparent objects), as sketched below.
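A hypothetical sketch of that workflow, with the "_through_transparency" pass names invented for illustration: a single gain now controls a light component everywhere in the frame, whether seen directly or through the transparent sphere.

# Specular light seen directly by the eye, and seen through transparency.
direct_view = {"direct_specular": 0.15, "indirect_specular": 0.05}
through     = {"direct_specular_through_transparency":   0.04,
               "indirect_specular_through_transparency": 0.01}

def specular_total(gain=1.0):
    # The sub-outputs are "ready to add", so one gain reweights specular
    # light consistently across the whole frame.
    return gain * (sum(direct_view.values()) + sum(through.values()))

print(specular_total())        # 0.25
print(specular_total(gain=2))  # 0.50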

Advantages

Disadvantages

Framebuffers, the way motion blur does them

If one still wonders about this style of transparency behavior, think about how motion blur behaves:

In the following example, rather than making the blue sphere slightly transparent, we simply make it move so fast that it becomes partially transparent by virtue of being motion blurred. This is the result, even with the older shaders' behavior:

Image:framebuffer-motionblur-rendering.jpg

The individual framebuffers would then look like this, without even having to use the layering shaders to render them:

Image:framebuffer-motionblur-behavior.jpg

As we see, the renderer produced exactly this result automatically: when it collapsed samples into pixels, the result was exactly what one expects.

No "transparency" output on even makes sense in this case - the object doesn't even have transparency. It just appears this way due to speed.

So, with the layering shaders' transparency capabilities, one can accomplish the same thing. Compositing is easier, control is greater, and the end result is less prone to compositing artifacts.

It is important to note that if the rasterizer is used, the useopacity flag must be true, since it is then the responsibility of the rasterizer to composite final samples into framebuffer pixels.

The Limit of what is Mathematically Possible

One thing to note is that this behavior makes complete sense for all outputs that are mathematically additive sub-components of light. It does not make sense for many other types of outputs, like:

outputs of arbitrary utility values (normal vectors and similar non-light data)
outputs of non-additive components, such as 'raw' and 'level' style outputs

The layering shaders can support the former by having special "extra outputs" on the material root node mila_material.

The second issue, though, is mathematically impossible to solve, and is explained in the next section:

Additive Passes

Why can we only mix "result" outputs?

The mia_material shader has all sorts of outputs of type 'raw', 'level' and 'result', where in general

  result = raw * level

Now, why can't these shaders have all these outputs, and mix them all nicely?

Well, the answer is that it is not mathematically possible to recreate the mix once the passes have been turned into pixels.

Assume we have two diffuse materials, A and B, and we want to mix 80% of A with 20% of B. Assume further that these shaders have "result", "raw" and "level" outputs, in which

  A.result = A.raw * A.level
  B.result = B.raw * B.level

Mixing for the FINAL result, we have:

  final.result = A.result * 80% + B.result * 20%

...but we cannot meaningfully mix the other outputs. If we try, for example:

  final.raw   = A.raw   * 80% + B.raw   * 20%
  final.level = A.level * 80% + B.level * 20%

...then

  final.raw * final.level = (A.raw * 80% + B.raw * 20%) * (A.level * 80% + B.level * 20%)
                          = (A.result * 80%) * 80% + (B.result * 20%) * 20% + cross terms ...

  final.result != final.raw * final.level

...because now the weight is applied to both "raw" and "level", and multiplying them together effectively applies the weights squared, no longer matching the mix of the individual "result" outputs!
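A quick numeric check, with made-up values for A and B:

# Mix 80% of A with 20% of B.
A_raw, A_level = 0.5, 0.8   # A.result = 0.40
B_raw, B_level = 0.9, 0.2   # B.result = 0.18

# Correct: mix the additive "result" outputs.
final_result = 0.8 * (A_raw * A_level) + 0.2 * (B_raw * B_level)

# Incorrect: mix "raw" and "level" separately, then re-multiply.
final_raw   = 0.8 * A_raw   + 0.2 * B_raw
final_level = 0.8 * A_level + 0.2 * B_level

print(final_result)              # 0.356
print(final_raw * final_level)   # 0.3944 -- does not match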

Finally, note that as a pixel is filtered, many final results from different samples within the pixel are weighted and mixed, which compounds the problem even further, and affects even simple passes at object edges. One may often see darkened edges on objects, or other artifacts, as a result of using such passes. If we use only passes that add various elemental amounts of light to create the final picture, we avoid this issue completely: use additive-only passes to avoid recombination issues during compositing.
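The edge artifact can be shown the same way. Assume a pixel whose two samples are half object and half empty background (values made up for illustration): the additive "result" pass filters correctly, while re-multiplying the separately filtered "raw" and "level" darkens the edge pixel.

# Two samples in one pixel: one hits the object, one hits the background.
samples = [
    {"raw": 1.0, "level": 0.5},  # object sample
    {"raw": 0.0, "level": 0.0},  # background sample
]

# Additive pass: filter (here, average) the per-sample results.
pixel_result = sum(s["raw"] * s["level"] for s in samples) / len(samples)

# Multiplicative reconstruction: filter raw and level separately first.
pixel_raw   = sum(s["raw"]   for s in samples) / len(samples)
pixel_level = sum(s["level"] for s in samples) / len(samples)

print(pixel_result)             # 0.25  -- correct
print(pixel_raw * pixel_level)  # 0.125 -- darkened edge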