Cameras

camera "name"
    [frame_buffers]
    [output_statements]
    [pass_statements]
    [camera_parameters]
end camera  

A camera provides parameters to describe how to project and map a view of the scene to the final 2D image (or film). It can be used to specify optional frame buffers and output statements or pass statements for further manipulations of samples or pixels with the help of custom shaders.

A camera is always fixed at the origin, looking down the negative Z axis, with up being the positive Y axis, in its own coordinate space called camera space. A camera can be placed anywhere in world space using an instance transformation, in the same way objects are placed in world space. (For backwards compatibility with previous mental ray versions, there is also a mode where camera space and world space are always identical.) Note that the camera instance must be attached to the root instance group of the scene.
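
For illustration, here is a minimal sketch of a camera that is declared, placed with an instance transformation, and attached to the root instance group (all names and matrix values are hypothetical):

camera "mycam"
    resolution 640 480
end camera

instance "mycam_inst" "mycam"
    # translation placing the camera in world space
    transform 1 0 0 0
              0 1 0 0
              0 0 1 0
              0 0 -10 1
end instance

instgroup "rootgrp"
    "mycam_inst"
end instgroup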

Frame Buffers

Cameras can contain frame_buffers which list named frame buffers and various properties associated with them, such as an image file name. Such a statement is basically a request for the creation of a frame buffer with a specific data type in mental ray. Further optional attributes control advanced properties of the buffer, such as marking it for output to an image file with certain options, which is performed by mental ray after rendering and output shading have finished.

The frame_buffers section may provide any number of the following statements:

framebuffer "name"
    [ datatype "datatype" ]
    [ filtering <bool> ]
    [ filename "filename" ]
    [ filetype "filetype" ]
    [ colorprofile "color profile" ]
    [ compression "compression" ]
    [ quality "quality" ]
    [ dod <bool> ]
    [ dpi <int> ]
    [ field "fieldname" ]
    [ premultiplied <bool> ]
    [ primary <bool> ]
    [ tiled <bool> ]
    [ user <bool> ]
    [ useopacity <bool> ]

The name of a frame buffer can be arbitrary. It is used for any further references in the scene description. Any following framebuffer statement that uses the same name refers to the same frame buffer, and may be used to incrementally change its properties.
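
For example, the following two statements configure a single buffer in two steps; the second adds file output properties to the buffer created by the first (the file name is illustrative):

framebuffer "main"
    datatype "rgba"

framebuffer "main"
    filetype "tif"
    filename "out.tif"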

datatype "datatype"
Specifies the data type of the buffer and, if enabled for file output, the data type in the target image file. If the data type is not directly supported by the file format then a conversion will be attempted for compatible types, like between different color types.
filtering on|off
Controls whether image filtering or interpolation is applied to the specified buffer. This corresponds to the "+" or "-" prefix of the data type string in old-style file output statements.
filename "filepath"
Write the buffer content to the file filepath after all output shaders are applied. Note that the filepath may be further manipulated by global substitution rules or overrides, such as from the command line of a standalone mental ray. Other file-related attributes on the same framebuffer can be added to control more details of the target file format.
Note Multiple frame buffers may be written to a single multi-layer file if the target file format supports layers. The same filepath should be used on the respective frame buffers.
filetype "filetype"
For file output, the filetype enforces the target file format. The default is RLA.
colorprofile "colorprofile"
For file output, the given colorprofile is applied to the color pixels before writing to the file.
compression "compression"
For file output, set a compression type. This is currently only supported for the OpenEXR file format. The available compression modes are "none", "rle", "piz", "zip", and "pxr24", see OpenEXR Compression for details.
quality quality
For file output, set a compression quality in the range 1 to 100. A value of 0 selects the built-in default. This is currently only supported for the JFIF/JPEG file format.
dod on|off
dpi dpi
field "fieldname"
For file output, set custom attributes dpi, dod, and field. The dpi attribute will be stored only in files of format IFF, TIFF, or JFIF/JPEG. The dod attribute is special to the IFF format. The values for these attributes are not computed by mental ray but simply passed through to file output as specified in these statements.
premultiplied on|off
This property is used by the rasterizer only, and only if sample compositing is enabled with the useopacity attribute for this frame buffer, or for all frame buffers using the rast useopacity global option. If set to on, pre-multiplied color values are assumed when adding two color values at the same sample position. If this flag is not set, the behavior of previous mental ray versions is used to retain backwards compatibility, which was always off for user frame buffers and on for standard frame buffers.
primary on|off
This flag marks the default color frame buffer for mental ray, which is used to drive oversampling based on color contrast, for example. The output file name and format overrides from the command line of a standalone mental ray will be applied to this buffer. If no frame buffer has been marked as primary, mental ray will select the first non-user color frame buffer by default.
Note This flag should be set on a single frame buffer only.
tiled on|off
For file output, determines whether tiling should be used when storing the frame buffer to an image file. This is useful if the resulting image file is meant to be used as a texture in a later rendering. The default is off (not tiled).
user on|off
This attribute marks the frame buffer as custom when set to on. Such buffers will not be filled by mental ray. However, they can be enabled for image file output like the standard buffers. Such buffers are typically needed for custom shader solutions, like intermediate buffers for compositing operations in an output shader. A user buffer with a unique name will always be created and kept separately, and is not considered for data sharing optimizations like standard buffers.
useopacity on|off
This attribute is used by the rasterizer only. When set to on on any user color frame buffer, compositing of shading samples will be based on the opacity value of the primary buffer. This assumes that the user buffer has been filled with shading samples by custom shaders, for example. A global option may be used to enable this for all user color frame buffers at once.
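
As a sketch, a user color frame buffer set up for rasterizer sample compositing might look like this (the buffer name is illustrative; a custom shader is assumed to fill the buffer with shading samples):

framebuffer "shadow_mask"
    datatype "rgba"
    user on
    useopacity on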

By default, if the related attributes have not been given, a frame buffer has datatype "rgba" with filtering on, is neither primary nor user, and has neither filetype nor filename set (thus, it is not written to a file).

The frame buffers maintained in mental ray are separated into standard and user buffers, controlled by the optional user attribute.

Note The standard frame buffers are eligible for data sharing. This means that frame buffers of the same or easily convertible data type but with different names may actually point to the same internal buffer, to optimize memory usage at the cost of a small runtime conversion overhead. For example, requesting both a floating-point and an 8-bit color frame buffer that are not marked as user will create a single floating-point color buffer only, and convert the pixel data to 8 bits on demand, such as when requested for file output.

Here are some examples for common cases of frame buffer usage:

framebuffer "main"
    datatype "rgba"
    filetype "tif"
    filename "output.tif"

framebuffer "depth"
    datatype "z"

Request a standard color buffer with alpha channel of 8 bits per component and the standard z-depth buffer. Both buffers will be filled by mental ray, and the color buffer will be written to an image file of format TIFF.

framebuffer "main"
    datatype "rgba"
    filetype "exr"
    filename "output.exr"

framebuffer "depth"
    datatype "z"
    filetype "exr"
    filename "output.exr"

Request a standard color buffer with alpha channel of 8 bits per component and the standard z-depth buffer. Both buffers will be filled by mental ray, and written as multiple layers into the same output file of format OpenEXR.

framebuffer "main"
    datatype "rgba_16"
    filetype "tif"
    filename "output.tif"

framebuffer "second"
    datatype "rgba_fp"

framebuffer "third"
    datatype "rgb"
    filetype "jpg"
    filename "output.jpg"
    quality 20

Request a standard color buffer with alpha channel in 16 bits per component, a second one with colors stored in floating-point precision, and a third 8-bit buffer without alpha channel. Write the final image as a TIFF file with 16-bit precision, and as a JFIF/JPEG image file at low quality in 8-bit precision and its native data type without alpha. mental ray may actually keep only one floating-point color frame buffer to fulfill this request.

framebuffer "main"
    datatype "rgba"
    filetype "tif"
    filename "output.tif"

framebuffer "comp"
    datatype "rgba"
    filetype "tif"
    filename "composit.tif"
    user on

output "composit_buffers" ("main", "comp")

Request a standard color buffer and a user color buffer of matching data type. The main buffer will be rendered by mental ray. A custom output shader composit_buffers will be called after rendering, to read from the main and write to the comp color buffer. At the end, both the main and the comp buffers will be written to separate image files in TIFF format by mental ray.

Output Statements

Cameras contain output_statements which specify output shader calls, or, in backwards compatible mode of mental ray, define file output to write images to disk.

The following output_statements are supported:

output "shader_name" ( parameters )

This output statement calls an output shader, such as a filter, that may operate on all available frame buffers. The actual buffers to work on may be supplied as parameters, or the shader may automatically detect available frame buffers by name or type. The output shaders are called after rendering has finished but before the marked frame buffers get written to image files by mental ray. All output shaders are called in a sequence, in the exact same order as listed in the camera.
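
For example, a call to a hypothetical custom output shader that reads one named buffer, applies a gain, and writes the result to another buffer might look like this (shader and parameter names are illustrative):

output "apply_grade" ("source" "main", "target" "graded", "gain" 1.2)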

The following output statements are supported for compatibility with earlier versions of mental ray. New applications should use the named frame buffers instead.

output ["datatype"] "shader_name" ( parameters )
output [colorprofile "profile_name"] ["datatype"] "filetype" [options] "filename"

Deprecated The first output statement calls an output shader, such as a filter, that may operate on all available frame buffers. Here, the datatype may be a comma-separated list of types if the shader requires multiple frame buffers. Each type can be prefixed with a "+" or "-" to turn interpolation on or off, which is on by default for the standard color frame buffer and off by default for all others. For example, a shader that filters the standard RGBA image with a filter whose size depends on the distance of objects needs both the interpolated RGBA buffer and the interpolated depth buffer, and would have a data type "rgba,+z". In previous versions of mental ray, the order of all frame buffer declarations was significant. In current versions of mental ray, all output shaders are applied before any of the images is written out to disk.
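
In the deprecated style, the depth-dependent filter shader described above would be requested as follows (shader and parameter names are hypothetical):

output "rgba,+z" "depth_blur" ("max_radius" 4.0)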

Deprecated The second kind of output statement writes an image to a file named filename, using the file format filetype. Normally, file formats imply a data type, but the defaults can be overridden by naming an explicit datatype. For example, the file type "tif", which stands for a TIFF color image file format, implies the data type "rgba". The data type controls which type of frame buffers mental ray actually creates and maintains during rendering. The optional colorprofile may be used to transform results stored in the color frame buffer to a desired color space before they are written to the output file. This option may only be used if a render color profile had been specified in the options block. Any number of output files can be created, but just the first one can be overridden from the command line of a standalone mental ray.

Deprecated The options specify additional format related parameters. Currently, the "jpg" file format supports one option quality q, where q is an integer value between 0 and 100. Lower values force higher lossy compression resulting in lower image quality. A quality value of 0 will cause the use of the default quality 75. For Softimage "pic" file formats, the options keywords even and odd are available to set the corresponding fields in the file header.

The data type "rgbe" which stores high dynamic range data is normally used for formats that support RGBE data natively ("hdr" or "cth"), but it can also be stored in any format that accepts 8-bit RGBA. This will result in image files that cannot be displayed with standard viewers, but tools exist that can process such data. For example, the following output statement will store RGBE data in an RLA file:

output "+rgbe" "rla" "file1.rla"

Unless there is also a true floating-point buffer ("rgba_fp"), specification of the "rgbe" type will switch mental ray's color frame buffer to RGBE mode because its high dynamic range is considered a superset of regular RGB. This can significantly reduce memory usage for large frame buffers, compared to floating-point frame buffers which are four times as large. Note that RGBE stores no alpha. By default, frame buffers are stored in frame buffer files on disk to allow arbitrary frame buffer sizes and arbitrary numbers of frame buffers.

Deprecated The data types "fb0" through "fbN" refer to user frame buffers 0...N. These user frame buffers are defined in the options statement using frame buffer statements. The actual data type of fb n is the type of frame buffer n. For example, the output statements

output "+rgba" "rgb"  "file1.rgb"
output "fb0"   "ctfp" "file2.ct"

write the standard frame buffer to the image file file1.rgb, and then write the contents of user frame buffer 0 to the image file file2.ct. This assumes that the options block contains a statement that defines user frame buffer 0, such as:

frame buffer 0 "+rgba_fp"

User frame buffers are empty unless some shader writes to them during rendering. Their purpose is to collect nonstandard image data during rendering, and to make the data available for output shading and image file writing.

A special data type "contour" can be specified that enables contour rendering. Special contour output shaders must be specified that pick up the contour information from the contour cell frame buffer and compute a color image, which they can either put into the regular color frame buffer, or composite on top of the color frame buffer. In the latter case, one rendering phase creates a color image with contours. The color frame buffer can then be written to an image file using a regular image output statement. There is also a built-in contour output shader that creates a PostScript file instead of a color image; see section contours for details and examples.
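
As a sketch, assuming the contour_composite output shader from the standard contour shader library, contours could be composited over the color frame buffer and the result written to a file:

output "contour,rgba" "contour_composite" ()
output "rgba" "tif" "contours.tif"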

Pass Rendering Statements

Pass rendering is a feature set that allows saving the results of rendering not in the form of image files, but in the form of sample lists. A sample is the result of a single primary eye ray, including all collected frame buffer information. Oversampling creates more than one sample per pixel, which are filtered to compute the pixel. Since pass rendering operates on samples, it has access to the complete set of subpixel information, and does not suffer from pixel aliasing problems as image compositing does. mental ray also supports merging of multiple pass files, sample by sample, to generate the final filtered output images. Finally, mental ray supports pass preprocessing, which is a function with random access to a single pass file to perform operations such as subpixel motion blur postprocessing.

The pass rendering functionality is controlled by the following pass_statements:

pass null

pass [ "types" ] write "filename"

pass merge [ "types" ]
     read [ "filename ", "filename", ... ]
     [ write "filename" ]
     [ function ( parameters ) ]

pass prep [ "types" ]
     read "filename"
     write "filename"
     function ( parameters )

pass delete "filename"

Pass statement lists are similar to output statement lists, which also allow storing and processing data in order.

pass null
Deletes all pass statements defined in the camera. This is useful for incremental changes. It is executed at parsing time, when the scene is defined.
pass
Saves the current sample rectangle to the named file. The current rectangle is initially the rendered rectangle, but it is updated every time a merge shader has run. It is executed every time a rectangle has finished rendering.
pass merge
Merges two or more sample files into the current sample rectangle buffer, and optionally writes the result to another file. If other sample files should be merged with the current rendering result, it is not necessary to first write the rendered samples to a file and then to name that file in the pass merge statement; instead, an empty filename string "" can be used to reference the rendered samples. If the merging function is omitted, mental ray uses a built-in default that supports depth and alpha blending of the main color frame buffer. The pass merge statement is executed every time a rectangle has finished rendering.
pass prep
Preprocesses a single pass file, and writes the result to another pass file. The function has random access to all samples in the input and output file. It is called only once before rendering begins.

The execution order of statements is important. Before rendering, all pass prep statements are executed in order; during rendering all pass and pass merge statements are executed in order for every finished rectangle; and after rendering all pass delete statements are executed. It is important to note that pass and pass merge statements are executed for every finished rectangle; this allows mental ray to minimize memory usage because only small sets of samples reside in memory at any one time. It also means that a pass prep statement cannot operate on the currently rendered pass because the pass prep function runs before rendering begins. In general, no two pass or pass merge statements should write to the same filename; this would result in sample data loss because of the per-rectangle interleaving.

All files are created before reading begins; hence, a pass prep statement should not read from a file written to during rendering, and the write filename should not be identical to any read filenames in any statement. In general, all filenames written to should be unique. The pass delete statement was provided to clean up temporary pass files after rendering. Pass files can become quite large, often in the hundreds of megabytes if the resolution, oversampling parameters, and the number and size of frame buffers is large. Note that only mental ray 3.2 and later compress pass files.
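
For example, the following sketch merges a previously rendered pass file with the samples of the current rendering, referenced by the empty filename string, and removes the temporary pass file after rendering (the file name is hypothetical; since no function is given, the built-in depth and alpha merge is used):

pass merge read [ "", "background.pass" ]
pass delete "background.pass"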

The types string controls which frame buffers are written to the pass file. Types apply to the write statement only. The syntax is the same as for output and frame buffer statements, such as "+rgba_fp,z". The default is the set of frame buffers defined by the output statement. Note that the main RGBA frame buffer is always written as floating-point RGBA, and that a depth (Z) frame buffer is always present, regardless of the type specification.

See also page pass rendering for more information.

Camera Parameters

There is a variety of camera_parameters that can be listed in the camera. Some of them can be overridden for a standalone mental ray by specifying an appropriate command line option.

There are four camera statements that accept shader lists: output, lens, volume, and environment. As with all types of shaders, more than one shader can be listed, or more than one such statement can be given, to attach multiple shaders (or output files in the case of the output statement) to each type. In an incremental change (the incremental keyword is used before the camera keyword), the first statement of each of the four types resets the list from the previous incremental change and does not add to the existing list, as multiple statements inside the same camera ... end camera block would.
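
For instance, the following incremental change replaces whatever lens shader list the camera had before with a single shader, while leaving the other three lists untouched (the shader name is hypothetical):

incremental camera "mycam"
    lens "my_fisheye" ()
end camera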

The following camera_statements are supported:

focal distance|infinity
The focal distance is set to distance. The focal distance is the distance from the camera to the viewing plane. The viewing plane is the plane that the rendered scene is projected onto; its edges correspond to the edges of the rendered image. A common approach is to set the focal distance to the middle distance of the interesting objects in the scene and then set the aperture such that it is a bit larger than the horizontal extent of the objects in camera space. If infinity is used in place of the distance, an orthographic view is rendered. An orthographic view turns off perspective; all camera rays are parallel. View-dependent surface tessellation is not possible in orthographic mode.
aperture aperture
The aperture is the width of the viewing plane. The height of the viewing plane is aperture divided by aspect. Together with the focal and aspect viewdefs, aperture defines the lens of the standard pinhole camera.
aspect aspect
This is the aspect ratio of the camera. The default is 1.33. In camera space, aperture is the width of the viewing plane and aperture divided by aspect is the height. The viewing plane is divided into pixels as specified by the resolution viewdef, so the aspect will result in nonsquare pixels if it is not equal to the X resolution divided by the Y resolution. For example, to render a PAL image at a resolution of 720 × 576 pixels, at an image ratio of 4:3 as defined by the PAL standard, pixels are slightly wider than tall, by a factor of 576⁄720 · 4⁄3 = 1.0667. If the aspect ratio is corrected by this number, objects will appear undistorted on a PAL video display (but not on a computer display with square pixels).
resolution x y
Specifies the width and height of the output image in pixels.

offset x y
Specifies an offset for the rendered image. The default is 0.0 for both values, which means that the image will be centered on the camera's Z axis. Positive values translate the image up and to the right. The offset is measured in pixel units.
window x_low y_low x_high y_high
Only the sub-rectangle of the image specified by the four bounds will be rendered. All pixels that fall outside the rectangle will be left black.
clip hither yon
The hither (near) and yon (far) planes are planes parallel to the viewing plane that delimit the rendered scene. This is primarily used to keep the scanline projection transformation stable; it does not affect ray tracing. Points outside the space between the hither and yon planes will not be rendered by scanline algorithms (this does not apply to the infinite-radius environment maps because they are not geometric objects). The clip statement specifies the distance of the hither and yon planes from the camera. The defaults are 0.0001 for the hither distance and 1000000.0 for the yon plane.
stereo method eyedistance [ parallax parallax_distance ]
If enabled, mental ray renders two sets of frame buffers, one set for the left eye (prefixed "lft_") and another one for the right eye (prefixed "rgt_"). Stereoscopic rendering affects the "first hit" renderers only: scanline, rasterizer, and eye ray tracing. Other view-dependent algorithms like final gathering, importons, or tessellation operate as if a single camera was used, also referred to as the "center eye".
The method should be one of:
off
Disable stereo rendering.
toein
Rotate the camera frustum such that the two camera direction vectors meet at the focal length. This method introduces incorrect vertical parallax, which increases away from the center of the projection plane and might cause discomfort when looking at stereo images for a longer time.
offaxis
The most appropriate way to render stereo pairs. It introduces no vertical parallax and is therefore less stressful to the eyes, but it requires an asymmetric camera frustum, which most real cameras do not support today. The optional parallax value is respected in this mode.
offset
The most straightforward method for rendering stereo pairs. It shifts the eyes along the X axis in camera space. Consequently, there can be objects that are seen by only one eye, which creates discomfort.
The eyedistance is the distance between the two eyes in camera space. A good initial value for this parameter is usually 1/30 of the focal length. The parallax_distance controls the distance of the zero parallax plane in camera space.
Orthographic cameras are not supported.
volume [ shader_list]
This statement specifies volume (atmosphere) shaders. The atmosphere affects all rays passing through the space outside objects by attenuating the color of the ray. It is possible to specify a volume shader for the inside of objects too, by naming a volume shader in the material statement (see above). If no shader_list is specified, the existing volume shader list is deleted; this is useful in incremental changes. If a list is given, it replaces the current list if this is the first volume statement in the camera block, or it is appended to the current list otherwise.
environment [ shader_list]
This statement specifies environment shaders. Environment shaders control the color returned by primary rays that, after leaving the camera, never strike any object in the scene. They are similar to environment shaders named in materials, which control reflection rays cast by the material shader that leave the scene without striking another object (or exceeding the trace depth). If no shader_list is specified, the existing environment shader list is deleted; this is useful for incremental changes. If a list is given, it replaces the current list if this is the first environment statement in the camera block, or it is appended to the current list otherwise.
lens [ shader_list]
Lens shaders simulate lenses by changing the camera. If no lens shader is present, the camera is a pinhole camera that casts rays from the origin in all directions that strike the viewing plane, or an orthographic camera that casts parallel rays if focal infinity is specified. A lens shader accepts the origin and direction of the camera ray, modifies them, and casts a new primary ray. Examples of lens shaders include fish-eye lenses that exaggerate the direction vector in a nonlinear way (there is a code example in the Shader section). Multiple lenses can be specified in the camera; the n-th lens shader receives the origin and direction computed by the (n-1)st lens shader. If no shader_list is specified, the existing lens shader list is deleted; this is useful in incremental changes. If a list is given, it replaces the current list if this is the first lens statement in the camera block, or it is appended to the current list otherwise.
frame frame [ time ] [ field field ]
Every camera should contain the current frame number frame. The time in seconds can optionally be specified as time. Optionally, a field number field can be specified; by convention, field should be 0 when rendering frames, 1 when rendering the first (odd) field, and 2 when rendering the second (even) field. If the field modifier is missing, the field number defaults to 0. mental ray currently does not use any of these values but makes them available to shaders.
data "data_name"|null
This statement attaches user data to the camera. The argument must be the name of a previously defined data element in the scene file. If null is specified instead of a data element name, a previously existing data reference is removed.
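
Putting several of these statements together, a typical perspective camera might look like this (all names and values are illustrative):

camera "mycam"
    framebuffer "main"
        datatype "rgba"
        filetype "tif"
        filename "frame0001.tif"
    focal 50
    aperture 44.72
    aspect 1.333
    resolution 1280 960
    clip 0.1 10000
    frame 1
end camera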

Copyright © 1986, 2015 NVIDIA ARC GmbH. All rights reserved.