Here we describe how the layering shaders write to user framebuffer passes, using a different mechanism from the one used in the past with the mia_material_x shader. This may help an integration adopt a simpler pipeline methodology. On this page, we explain why the new approach is better suited to production use, and point out the drawbacks of the older method.
For a minimal setup, specify specially named framebuffers for the camera, simply creating the framebuffers with the standard names. Here is an example, in scene description language, of how to specify the framebuffers:
framebuffer "indirect_diffuse" datatype "rgba_16" filtering true filename "passes.exr" compression "rle" premultiplied true user true useopacity true attribute string "LPE" "L.+<RD>E" framebuffer "direct_diffuse" datatype "rgba_16" filtering true filename "passes.exr" compression "rle" premultiplied true user true useopacity true attribute string "LPE" "L<RD>E" ... ... etc repeated for all the buffers ...s
These are the framebuffer names and their meaning:
In a sample scene with a few layers, we place a simple transparent sphere in front of an opaque robot. In this case, the transparency conceptually represents a thin glass sphere, as opposed to a solid glass sphere, which would be better implemented with a specular transmission component.
In general, the shader on the topmost object outputs values to the framebuffers. With opaque objects, this is visually clear across the diffuse and reflection passes. However, with a transparent object, the renderer will also execute shaders on the objects behind it. So the key production question is: to which framebuffer pass or passes will the background object contribute?
In a simple implementation, the topmost shader outputs the values to a separate "transparency" pass containing the full result (i.e., a beauty pass) of what is behind the topmost object. It may look like this:
Here we see the various outputs of mia_material_x. The surfaces are split into indirect and direct diffuse, specular (direct specular, or highlight) and reflection (indirect specular), so that the appearance of these surfaces can be tuned in post production. But the blue sphere appears completely opaque in each of these.
Objects that have transparency, like this blue sphere, simply include the objects behind them in the transparency output of the shader. This transparency output is no longer split into diffuse, specular, etc. parts; it comes in wholesale, as a final composited result, like a beauty pass for the background objects.
Advantages
Disadvantages
Overall, we lack control across objects; what is seen in transparency will not "follow along" when reweighting is applied to the other additive passes in the compositing stage. Transparency is entirely independent, with no control over the separate light passes.
Framebuffer shaders have been developed in the past that can address this issue, for those willing to build more complex shading networks. In fact, we have developed and tested several versions, though their usage may have been difficult to support. However, we believe we can remove some of those drawbacks, in both ease of use and performance, by incorporating framebuffer support into the layering functionality of this library.
In the layering library, we offer output writing based on knowing which elemental component is being executed. In terms of Light Path Expressions, the elemental shaders identify the component interaction stage before the eye, and often whether the incoming light for that interaction is direct or indirect. For example, "L<RD>E" is identified by direct light hitting inside mila_diffuse_reflection, while indirect light is represented by "L.+<RD>E".
We could have a transparency output as "L.*<TS>E", which could also receive contribution from specular transmission. It is also possible to separate out the diffuse, glossy, and specular interactions hit from the eye through transparency. A compositor can think of these sub-outputs as ready to add into the individual outputs seen directly by the eye (not through transparent objects).
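As a sketch, such a pass could be declared with the same framebuffer syntax shown earlier; the buffer name "transparency" is only illustrative here, and the Light Path Expression is the "L.*<TS>E" discussed above:

framebuffer "transparency"
    datatype "rgba_16"
    filtering true
    filename "passes.exr"
    compression "rle"
    premultiplied true
    user true
    useopacity true
    attribute string "LPE" "L.*<TS>E"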
Advantages
Disadvantages
If one still wonders about this style of transparency behavior, think about how motion blur behaves:
In the following example, rather than making the blue sphere slightly transparent, we simply make it move so fast that it becomes partially transparent by virtue of being motion blurred. This is the result, even with the older shader behavior:
The individual framebuffers would then look like this, without even having to use the layering shaders to render them:
As we see, the renderer produced this result automatically: when it collapsed the samples into pixels, the result was exactly what one expects.
No "transparency" output on even makes sense in this case - the object doesn't even have transparency. It just appears this way due to speed.
So with the layering shaders' transparency capabilities, one can accomplish the same thing. Compositing is easier, control is greater, and the end result is less prone to compositing artifacts.
It is important to note that if the rasterizer is used, the useopacity flag must be true, since it is then the responsibility of the rasterizer to composite final samples into framebuffer pixels.
One thing to note is that this behavior makes complete sense for all outputs that are mathematically additive sub-components of light. It does not make sense for many other types of output, such as:
The layering shaders can support the former by having special "extra outputs" on the material root node mila_material.
The second issue, though, is mathematically impossible to solve, as explained in the next section:
The mia_material shader has all sorts of outputs that are of type 'raw', 'level' and 'result', where in general:

result = raw * level
Now, why can't these shaders have all these outputs, and mix them all nicely?
Well, the answer is that it is not mathematically possible to recreate the mix once the passes have been turned into pixels.
Assume we have two diffuse materials, A and B, and we want to mix 80% of A with 20% of B. Assume further these shaders have "result", "raw" and "level" outputs, in which
A.result = A.raw * A.level
B.result = B.raw * B.level
For the FINAL result of the mix, we have:
final.result = A.result * 80% + B.result * 20%
...but we cannot mix the other outputs the same way. If, for example, we set:
final.raw = A.raw * 80% + B.raw * 20%
final.level = A.level * 80% + B.level * 20%
...then
final.raw * final.level = ((A.raw * 80%) + (B.raw * 20%)) * ((A.level * 80%) + (B.level * 20%))
final.raw * final.level = (A.result * 80%) * 80% + (B.result * 20%) * 20% + more multiplying terms ...
final.result != final.raw * final.level
...because now the weight is applied both to "raw" and "level", and multiplying them together effectively applies the weights squared... no longer matching the mix of the individual "result" outputs!
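To see the mismatch with concrete (purely illustrative) numbers, suppose:

A.raw = 1.0, A.level = 1.0, so A.result = 1.0
B.raw = 0.5, B.level = 0.2, so B.result = 0.1

final.result = 1.0 * 80% + 0.1 * 20% = 0.82

...but mixing the other outputs gives:

final.raw = 1.0 * 80% + 0.5 * 20% = 0.90
final.level = 1.0 * 80% + 0.2 * 20% = 0.84
final.raw * final.level = 0.756, which is not 0.82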
Finally, note that as a pixel is filtered, many final results from different samples in the pixel are weighted and mixed. This compounds the problem even further, and it affects even simple passes at object edges: one may often see darkened edges on objects, or other artifacts, as a result of using such passes. If we instead use passes that only add various elemental amounts of light to create the final picture, we avoid this issue completely. In short, use additive-only passes to avoid recombination issues during compositing.
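As a sketch of why edges darken, consider a hypothetical edge pixel in which half of the samples hit the object and the other half hit empty background:

object samples: raw = 1.0, level = 0.5, result = 0.5
background samples: raw = 0.0, level = 0.0, result = 0.0

filtered pixel: raw = 0.5, level = 0.25, result = 0.25

Recombining the filtered passes in compositing gives raw * level = 0.125, only half of the correct 0.25 - a darkened edge. The additive "result"-style passes do not suffer from this, since a weighted sum of sums still adds up to the correct beauty.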