Rasterizer
mental ray supports a first-hit rendering algorithm called the
rasterizer. It achieves superior motion blur speed compared to the
traditional scanline mode. The primary difference is the separation of
the shading sample phase from final sample compositing (also called
sample collection). This allows anti-aliasing and motion blur image
quality to be tuned independently of the shading computation. Without
the rasterizer, mental ray shades each of the spatial and temporal
sample points (eye rays) selected within the shutter interval. As a
consequence, any increase in anti-aliasing quality requires a
proportional increase in shading effort.
The rasterizer works by shading objects at certain spatial and temporal
sample positions and caching these shaded samples. If an object moves,
the shaded results can be re-used at each point along its motion blur
path. The cache is tied to the geometry:
-
A number of sample points is selected on each triangle, based on the
visibility, size, and orientation of the triangle. These points are
then individually shaded by calling the material shaders as usual.
The results are stored in front-to-back order for later combination.
Care is taken to minimize the shading of points hidden behind other
geometry, but although rendering proceeds in a roughly front-to-back
order, there is no guarantee of the exact order, unlike with the regular
scanline algorithm or ray tracing. For this reason, only the surface
shading and transparency are stored initially; volume and environment
shading are calculated later.
-
The tile is scanned, and all stored shading results are composited
front to back into individual screen samples, using their opacities to
combine their colors; the volume and environment shaders are then
called and combined with the surface shading (as sketched after this
list).
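
The compositing step can be pictured with a small sketch. The following
C fragment is purely illustrative and not part of the mental ray shader
interface; the Color and ShadeSample types and the collect function are
invented here, and it assumes unpremultiplied sample colors combined
front to back by per-channel opacity:

    #include <stddef.h>

    typedef struct { float r, g, b, a; } Color;

    typedef struct {
        Color color;    /* RGBA returned by the material shader      */
        Color opacity;  /* per-channel opacity used for compositing  */
    } ShadeSample;

    /* Combine cached shading samples, already sorted front to back,
     * into a single screen-sample color. Each sample is weighted by
     * the transmittance remaining after the samples in front of it. */
    static Color collect(const ShadeSample *samples, size_t n)
    {
        Color out = {0.0f, 0.0f, 0.0f, 0.0f};
        Color t   = {1.0f, 1.0f, 1.0f, 1.0f}; /* remaining transmittance */
        size_t i;

        for (i = 0; i < n; i++) {
            const ShadeSample *s = &samples[i];
            out.r += t.r * s->opacity.r * s->color.r;
            out.g += t.g * s->opacity.g * s->color.g;
            out.b += t.b * s->opacity.b * s->color.b;
            out.a += t.a * s->opacity.a * s->color.a;
            t.r *= 1.0f - s->opacity.r;
            t.g *= 1.0f - s->opacity.g;
            t.b *= 1.0f - s->opacity.b;
            t.a *= 1.0f - s->opacity.a;
        }
        /* At this point the volume and environment shaders would be
         * called and combined with the accumulated surface shading. */
        return out;
    }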
The late compositing of shading samples into screen samples, and the
re-use of shading results, have several important consequences:
-
If the material shader traces rays with shader API functions like
mi_trace_reflection, the result is re-used at all points that the
object moves along. This has the effect that the object appears to
"drag" its reflections and refractions with it. For example, if a
mirror that is coplanar to the image plane moves sideways, its edges
are always blurred, but the objects being reflected are blurred only
with the rasterizer.
-
Transparency (mi_trace_transparent) can be calculated by the regular
scanline algorithm without tracing rays, by following the chain of
depth-sorted triangles behind the current point on the image plane.
Since the rasterizer shades points on triangles one by one, and only
combines the results according to depth later, at the compositing
stage, mi_trace_transparent always returns false and transparent
black. As long as the shader performs standard linear compositing,
this gives the same results. But if the shader makes decisions based
on the value returned by mi_trace_transparent, such as casting
environment rays, the results may differ from expectations (see the
sketch after this list).
-
In particular, shaders that implement matte objects will not work
without modification. Matte objects are placeholders for later
compositing outside of mental ray, like transparent cut-outs where
live action will be added later. Since the rasterizer ties its shading
sample combination to the alpha component of the RGBA color returned
by the material shader, it will fill in such cut-outs. To avoid this,
a shader may use the new mi_opacity_set function to explicitly set the
opacity for the compositing stage, independently of the returned alpha
value. In other words, if an explicit opacity value is set, the alpha
channel of the shading result color is ignored for calculating
transparency, and is retained only for writing to the final output
buffer; the opacity color is then used to combine shading values front
to back, whereas in the absence of an explicit opacity, alpha is used
to combine shading samples front to back. A matte object could have an
alpha of 0 but set an opacity of 1. In this manner one can render
solid objects with transparency built in, for correct results during
later, external compositing. There is also an mi_opacity_get function
to support cooperating shaders in Phenomena.
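
The following C fragment sketches such a matte shader. The shader name,
its empty parameter list, and the overall structure are invented for
this illustration; only mi_opacity_set, mi_trace_transparent, and the
general material shader calling convention are taken from the shader
interface described above:

    #include "shader.h"   /* mental ray shader interface */

    DLLEXPORT int matte_sketch_version(void) { return 1; }

    /* Illustrative matte shader: the returned alpha of 0 keeps the
     * cut-out transparent in the final output buffer for external
     * compositing, while the explicit opacity of 1 makes the
     * rasterizer composite the object as solid, so geometry behind
     * the matte does not fill in the cut-out. */
    DLLEXPORT miBoolean matte_sketch(
        miColor *result,
        miState *state,
        void    *paras)
    {
        miColor opaque;
        opaque.r = opaque.g = opaque.b = opaque.a = 1.0f;

        /* Transparent black result: alpha 0 is retained for the
         * output buffer only, since an explicit opacity is set
         * below. */
        result->r = result->g = result->b = result->a = 0.0f;

        /* Use opacity, not alpha, for front-to-back combining. */
        mi_opacity_set(state, &opaque);

        /* Note: under the rasterizer, mi_trace_transparent() would
         * always return false and transparent black here, because
         * depth sorting happens later at the compositing stage; a
         * shader should not base decisions (such as casting
         * environment rays) on its return value. */

        return miTRUE;
    }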
The rasterizer can be enabled with a scene option or on the command
line of a standalone mental ray. mental ray controls the pixel
over-sampling with the samples collect option, which gives the number
of samples per pixel dimension. For example, the default value of 3
gives 9 samples per pixel. The rate of shading is controlled
independently with shading samples, which defaults to 1.0, or roughly
one shading call per pixel. Note that this drives the internal
tessellation depth, and takes effect after the geometry's own
approximation has been calculated. It is possible to override the
shading samples either per object or per instance.
For further acceleration of motion blur rendering, the shading
frequency can be reduced for fast-moving objects by tuning the motion
factor in the scene options or on the command line of a standalone
mental ray. This is a positive floating-point value that divides the
number of shading samples taken, in proportion to the speed of the
object's motion in screen space. The default value of 0.0 disables
this feature. A good starting value for many scenes is 1.0; higher
values further reduce the number of shading samples, and thus raise
performance, for fast-moving objects (see the example below).
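
Put together, the relevant statements in a .mi scene file might look
like the following sketch. The option names match those referenced
above, but the exact spelling of the rasterizer statement (here assumed
to be scanline rapid) and of the per-object override should be checked
against the scene description reference for the mental ray version in
use:

    options "opt"
        scanline rapid        # enable the rasterizer
        samples collect 3     # 3 samples per pixel dimension, 9 per pixel
        shading samples 1.0   # roughly one shading call per pixel
        motion factor 1.0     # fewer shading samples on fast motion
        ...
    end options

    object "obj"
        shading samples 2.0   # per-object override of the shading rate
        ...
    end object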
Due to the different sampling patterns, sample passes rendered with
the rasterizer should not be combined with passes that do not use the
rasterizer.