declare shader "mip_gamma_gain" (
color "input",
scalar "gamma" default 1.0,
scalar "gain" default 1.0,
boolean "reverse" default off,
)
apply texture, environment, lens
version 1
end declare
This is a simple shader that applies a gamma and a gain (multiplication)
to a color. Many similar shaders exist in various OEM integrations of
mental ray, so this shader is primarily of interest for standalone
mental ray and for cross-platform phenomena development.
If reverse is off, the shader takes the input,
multiplies it by the gain, and then applies a gamma
correction of gamma to the color.
If reverse is on, the shader takes the input,
applies a reverse gamma correction of gamma to the color,
and then divides it by the gain; i.e. the exact inverse of
the operation performed when reverse is off.
The shader can also be used as a simple gamma lens shader, in which
case the input is not used; the eye ray color is used instead.
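For example, to apply a typical display gamma of 2.2 to the rendered
image, the shader can be attached to the camera as a lens shader. The
following .mi fragment is a minimal sketch; the shader instance and
camera names are illustrative:

    shader "display_gamma" "mip_gamma_gain" (
        "gamma"   2.2,
        "gain"    1.0,
        "reverse" off
    )

    camera "cam"
        lens = "display_gamma"
        # ... other camera statements ...
    end camera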
Render Subset of Scene
This shader allows re-rendering a subset of the objects in a scene,
defined by material, geometric objects, or instance labels. It is
the ideal "quick fix" solution when almost everything in a scene
is perfect, and just this one little object or one material needs
a small tweak[1].
It is applied as a lens shader and works by first testing which
object an eye ray hits; only if the object is part of the
desired subset is it actually shaded at all.
Pixels of the background and other objects by default return
transparent black (0 0 0 0), making the final render image ideal
for compositing directly on top of the original render.
So, for example, if a certain material in a scene did not turn out
satisfactorily, one can simply:

  1. Modify the material.
  2. Apply this lens shader and choose that material.
  3. Render the image (at a fraction of the time of re-rendering the whole image).
  4. Composite the result on top of the original render!
[Figure: an example of using mip_render_subset on one material]
Naturally, only pixels which see the material "directly" are
re-rendered, and not e.g. reflections in other objects
that show the material.
The shader relies on the calling order used in ray tracing and does not work
correctly (and yields no benefit in render time) when using the rasterizer,
because the rasterizer calls lens shaders after already shading the
surface(s).
declare shader "mip_render_subset" (
boolean "enabled" default on,
array geometry "objects",
array integer "instance_label",
material "material",
boolean "mask_only" default off,
color "mask_color" default 1 1 1 1,
color "background" default 0 0 0 0,
color "other_objects" default 0 0 0 0,
boolean "full_screen_fg" default on
)
apply lens
version 5
end declare
enabled turns the shader on or off. When off, it does nothing,
and does not affect the rendering in any way.
objects, instance_label and material are the
constraints one can apply to find the subset of objects to shade.
If more than one constraint is present, all must be fulfilled; i.e.
if both a material and three objects are chosen, only those of the
objects that actually have that material will be shaded.
If one does not want to shade the subset, but only find where it is on
screen, one can turn on mask_only. Instead of shading the
objects in the subset, the mask_color is returned, and no
shading whatsoever is performed, which is very fast.
Rays not hitting any objects return the background color,
and rays hitting any object not in the subset return the
other_objects color.
Finally, full_screen_fg decides whether the FG preprocess
should apply to all objects, or only to those in the subset. Since FG
blends neighboring FG samples, it is probable that a given object
may use information from FG points on nearby objects
not in the subset. This is especially true if the objects are
coplanar. Therefore it is advised to let the FG prepass "see" the
entire scene.
Naturally, turning off this option and creating FG points only for
the subset of objects is faster, but there is a certain risk
of boundary artifacts, especially in animations. If the scene uses
a saved FG map, this option can be left off.
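As a concrete version of the workflow above, the following .mi fragment
(a sketch; the instance, camera and material names are illustrative)
re-renders only the objects carrying one modified material:

    shader "subset_lens" "mip_render_subset" (
        "enabled"       on,
        "material"      "tweaked_material",
        "background"    0 0 0 0,
        "other_objects" 0 0 0 0
    )

    camera "cam"
        lens = "subset_lens"
        # ... other camera statements ...
    end camera

The resulting image contains the re-shaded subset over transparent
black, ready to be composited on top of the original render.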
Binary Proxy
This shader allows a very fast way to implement demand-loaded geometry.
Its main goal is performance, since it leapfrogs any form of translation
or parsing by directly writing a binary file format which can be
sucked directly into RAM at render time. There are many other methods
to perform demand loading in mental ray (assemblies, file objects,
geometry shaders, etc.), but these may require specific support in the
host application, and generally involve parsing or translation steps
that can impact performance.
To use it, the shader is applied as a geometry shader in the scene. See
the mental ray manual about geometry shaders.
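For reference, the declaration looks approximately as follows. This is
a sketch reconstructed from the parameter descriptions below (the shader
is named mip_binaryproxy in the production library); the exact version
number may differ:

    declare shader "mip_binaryproxy" (
            string   "object_filename",
            boolean  "write_geometry",
            geometry "geometry",
            scalar   "meter_scale",
            integer  "flags"
        )
        apply geometry
        version 1
    end declare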
object_filename is the filename to read (or write). By convention,
the file extension is ".mib" for "mental images binary".
The shader has a "read" mode and a "write" mode:
When the boolean write_geometry is on and the geometry
parameter points to an instance of an existing scene object, this object
will be written to the .mib file named by object_filename.
When write_geometry is off, the geometry parameter is
ignored (not used). Instead, a mental ray placeholder object is created
by the shader which contains a callback that will load the actual geometry
from the file on demand (generally when a ray hits its bounding box,
although mental ray may choose to load it for other reasons at other times).
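The following .mi sketch shows both modes attached to instances as
geometry shaders (the instance and file names are illustrative, and the
exact attachment syntax may vary between host applications):

    # write mode: tessellate the object referenced by "source_inst"
    # and store the result in the .mib file
    instance "writer_inst"
        geometry "mip_binaryproxy" (
            "object_filename" "teapot.mib",
            "write_geometry"  on,
            "geometry"        "source_inst",
            "meter_scale"     1000.0
        )
    end instance

    # read mode: create a placeholder that demand-loads the file
    # ("geometry" is ignored in this mode)
    instance "proxy_inst"
        geometry "mip_binaryproxy" (
            "object_filename" "teapot.mib",
            "write_geometry"  off,
            "meter_scale"     1000.0
        )
    end instance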
The meter_scale allows the object to be interpreted in a
unit-independent way. If this is 0.0, the feature is not used. When
used, the value should be the number of scene units that represent
one meter, i.e. if scene units are millimeters this would be 1000.0,
etc.
When writing (write_geometry is on), this value is simply
stored as metadata in the .mib file. When reading (write_geometry
is off), the object is scaled by the ratio between the value stored in
the file and the value passed at "read time", to account for any
difference in units. For example, a file written from a millimeter
scene (meter_scale 1000.0) and read into a centimeter scene
(meter_scale 100.0) is scaled so that an object that was one meter
tall remains one meter tall.
The flags parameter is for algorithm control and should in most
cases be left at 0. It is a bit flag, with each bit having a specific
meaning.
Currently used values are:
[1] Force use of "assemblies" rather than "placeholders". These are
two slightly different internal techniques that mental ray uses to
demand-load objects. See the mental ray manual for more information.
Note that assemblies only work with BSP2 acceleration, and that multiple
instances of the same assembly cannot have different materials or
object flags applied. This limitation does not exist for placeholders.
[2] "Auto-assemblies" mode. The shader uses assemblies if BSP2 acceleration is
used, placeholders if not.
[4] Do not tessellate the object before writing it; the object is written in its
raw format. The object must already be a mental ray primlist (miBox) for this
to work. When this bit is set, displacement will not be baked to the file.
When it is not set (the default), displacement will be baked[2].
All other bits should be kept zero, since they may become meaningful in future versions.
FG shooter
This shader is used to "shoot" finalgather (FG) rays into the scene in a determined
way, forcing to place FG points at specific locations irrespective of mental ray.
Normally, without this shader, during mental ray's final gather precomputation stage
eye rays are shot through the camera into the scene, and FG points are placed in the
scene according to the current view of the render camera. One advantage is, that the
density of FG points is relative to image space, thus automatically adaptive to the
visible areas of interest.
However, in camera animations the location of FG points will actually move along with
the camera. This can potentially cause artifacts in certain situations - especially
if the camera moves slowly, or only moves a little (e.g. in a small pan,
dolly, tilt, truck or crane move).
The "FG Shooter" shader exists to safeguard for this eventuality; it allows using
one or more fixed transformation matrices as the root location from which to "shoot"
eye rays during the final gather precomputation phase only. This guarantees
that even if the actual render camera moves the FG points will keep their positions,
as determined by the given matrix or matrices.
declare shader "mip_fgshooter" (
integer "mode",
array transform "trans"
)
version 1
apply lens
end declare
The trans parameter contains an array of transformation matrices, defining
how the eye rays are shot during the final gather prepass. Instead of using the camera
to calculate the "eye" rays, they are shot from the point 0,0,0, mapping the screen
plane to the unit square surrounding the point 0,0,-1, and are then transformed by the
passed matrix (or matrices) into world space.
The mode parameter defines how the matrix (or matrices) passed are displayed
during the final gather prepass. Since how it is displayed impacts the
adaptivity, this has some functional implications:

0 breaks the render into "subframes", each containing the image as seen from
a given position. This requires a higher final gather density to
resolve the same number of details, but gives the best adaptivity.
1 stacks the different results on top of each other. This does not require
any additional density, but the final gather adaptivity does not work as well.
2 works like 1, but only visibly displays one pass (all the others are calculated
exactly the same).
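A minimal .mi sketch of attaching the shader (the names and matrix are
illustrative; a transform in .mi is written as 16 numbers, here an
identity matrix shooting from the world origin):

    shader "fg_fix" "mip_fgshooter" (
        "mode" 0,
        "trans" [
            1 0 0 0
            0 1 0 0
            0 0 1 0
            0 0 0 1
        ]
    )

    camera "cam"
        lens = "fg_fix"
        # ... other camera statements ...
    end camera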
Footnotes
[1] And the client is on his way up the stairs.

[2] Note that when baking displacement, a view-dependent approximation
cannot be used. This is because there is no view set up at the time when
this shader executes, so the resulting tessellation would turn out very
poor.