Here is a summary of some of the new features and feature improvements in version 3.8 of mental ray. Please refer to the release notes for more details and for other changes which are not mentioned here.
This version offers a new rendering mode that generates photo-realistic imagery using ray tracing, capturing global illumination without introducing algorithm-specific artifacts and without requiring renderer-specific parametrization. When coupled with highly parallel processing platforms, such as CUDA-capable hardware, mental ray can deliver these results progressively, at interactive frame rates. This mode is called the iray rendering mode. It is controlled with string options or on the command line of standalone mental ray. See also Known Limitations.
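A minimal sketch of enabling the iray mode in a .mi options block, using the new string options listed in the scene description changes below (the device and path length values are purely illustrative):

```
options "opt"
    "iray" on                    # enable the iray rendering mode
    "iray devices" "0"           # illustrative: select the first CUDA device
    "iray max path length" 4     # illustrative path length limit
end options
```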
This version of mental ray adds runtime support for the features listed in the MetaSL language specification version 1.1. This includes handling of the newly introduced material descriptions and BRDF properties, as well as support for scene data access from within MetaSL shaders. See also Known Limitations.
The MetaSL back-end technology for automatic compilation and execution of shaders delivered as source code has been improved and extended. In addition to the existing C++ back-end, a new LLVM back-end is available. It provides a platform-independent way to deploy and execute shaders without requiring external compiler or framework installations. Furthermore, a newly added shader caching mechanism supports incremental MetaSL shader editing workflows within mental ray. See also Known Limitations.
mental ray supports stereoscopic rendering in a single run with optimal performance. It generates the two images for the left and the right eye automatically. Only the primary rendering algorithms, like the rasterizer and ray tracing (of eye rays), are affected by the slight offset of the eye. Secondary effects with view dependency, like tessellation or final gathering, are not affected but use the "center" eye as usual. Stereo rendering should not influence existing shaders. The mental ray display protocol has been extended to send stereoscopic information across the connection, and the image tools have been updated accordingly to cope with stereo image files and to display images live while rendering in stereo. Stereo rendering is enabled with a camera setting.
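A hedged sketch of the camera setting, following the stereo statement syntax listed in the scene description changes below (the eye distance value is illustrative):

```
camera "cam"
    stereo offaxis 6.5    # off-axis stereo with an illustrative eye separation of 6.5 units
    # ... other camera statements (output, focal, aperture, resolution, ...)
end camera
```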
Shaders can implement better texture filtering with the help of so-called ray differentials, which are natively supported by mental ray. Ray differentials make it possible to use dynamic filter sizes to reduce artifacts, especially in secondary effects like reflections and refractions. Automatic texture filtering can be enabled with a string option. Additionally, extended versions of the texture lookup shader API functions make it possible to improve existing texture filtering implementations with little effort.
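Enabling the automatic texture filtering is a single string option in the options block (a minimal sketch):

```
options "opt"
    "ray differentials" on    # enable automatic texture filtering via ray differentials
end options
```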
The final gathering (FG) algorithm can now be used together with Importons, or even with Irradiance Particles (IP), to benefit from the importance-based computations of those techniques. These combinations, which were rejected in previous versions, are enabled simply by activating both FG and Importons, or FG and IP. During rendering, after the Importon or IP passes have finished, the FG points are placed in the scene. FG rays are then shot not in a uniform way, but in the importance-driven way dictated by the Importons or the IP map, which determines in which directions to shoot more rays and in which ones fewer. The general outcome of enabling FG with importance-based techniques is better final quality coupled with lower rendering times.
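A sketch of activating both techniques in a .mi options block; note that the "importon" string option name is an assumption based on the mental ray string options documentation and is not listed in this document:

```
options "opt"
    finalgather on    # enable final gathering
    "importon" on     # assumed string option: enable importon emission
end options
```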
The ray tracing acceleration algorithm BSP2 in mental ray has been revised both in terms of memory usage and speed. It has been optimized especially for handling dynamic scene content with on-demand loaded scene parts provided as assemblies. In the case of motion blur with ray tracing, the memory consumption has been reduced noticeably, with a positive impact on overall performance.
A new acceleration technique for ray tracing hair has been implemented. It leads to noticeable performance improvements in both execution time and memory consumption, especially in the presence of assemblies. When hair is used with the rasterizer, a new mechanism can be enabled with a string option which decreases memory usage to a fraction of that in previous versions, rendering faster by minimizing or even completely avoiding any flushing during rendering. In addition, the default automatic splitting of long hairs and large hair counts has been re-designed and greatly improved: artifacts of missing hair segments are gone, and tessellation behavior is more adaptive and memory efficient.
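A minimal sketch of enabling the new rasterizer hair mechanism, using the string option listed in the scene description changes below:

```
options "opt"
    scanline rapid               # select the rasterizer
    "rast hair disposable" on    # reduce rasterizer hair memory usage (default: off)
end options
```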
The progressive rendering and IBL algorithms have been improved and extended with capabilities to trade speed for quality. This includes the introduction of a specialized occlusion cache to pre-compute shadowing, as well as support for approximations of controllable quality for the lighting contribution from the IBL environment. A new integration interface has been designed for the implementation of interactive rendering solutions; it makes it possible to apply changes to the model while receiving and displaying the rendered full-resolution images at interactive frame rates.
The Map primitive adds support for attaching global data. This global data is made up of fields similar to the regular per-element data, but their values are considered identical for all elements. The runtime for handling Maps in mental ray has been extended with a caching system so that it can operate on large Maps which exceed the size of the physical memory installed on a machine. The performance of Map accesses has been improved noticeably.
The image display tool imf_disp has been re-implemented based on a unified user interface that looks identical on all supported platforms. It provides identical workflows and interactions independent of the system. It adds new features like exposure control (in addition to gamma control), zooming the display in and out, playback of animation sequences from a selection of files, and anaglyph color viewing of stereo images. Furthermore, the tools have been extended to report, display, and save separate layers in multi-layer image files in OpenEXR format.
The following changes were made in the .mi scene description syntax:
New string options for the iray rendering mode:

    "iray" on|off
    "iray devices" "string"
    "iray max path length" integer
    "iray threads" integer
New string option for automatic texture filtering:

    "ray differentials" on

The default is off.
A new stereo statement in the camera enables stereoscopic rendering:

    camera "camera_name"
        ...
        stereo method eyedistance [ parallax parallaxdist ]
        ...
    end camera

The method is one of off, toein, offset, or offaxis. See the camera parameter stereo for more details.
New string option for hair rendering with the rasterizer:

    "rast hair disposable" on

The default is off.
New string options for approximate environment lighting:

    "environment lighting mode" "approximate"
    "environment lighting approximate split" numint
    "environment lighting approximate split vis" numint

See the detailed string options descriptions for more information.
New string options for the progressive occlusion cache:

    "progressive occlusion cache points" numint
    "progressive occlusion cache rays" numint
    "progressive occlusion cache max frame" numint
    "progressive occlusion cache exclude" numint

See the detailed string options descriptions for more information.
New string options for environment lighting (IBL):

    "environment lighting mode" "automatic"
    "environment lighting cache" on|off
    "environment lighting resolution" numint
    "environment lighting scale" factorscalar
    "environment lighting shader samples" numint

See the detailed string options descriptions for more information.
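The environment lighting options above can be set directly in a .mi options block; a hedged sketch with illustrative values:

```
options "opt"
    "environment lighting mode" "automatic"     # enable IBL environment lighting
    "environment lighting resolution" 512       # illustrative environment map resolution
    "environment lighting shader samples" 4     # illustrative sampling quality
end options
```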
The miCamera structure has been extended with the new fields miCamera_stereo, eye_separation, and parallax_distance for stereoscopic rendering.
When rendering in stereo, the output images for the left and the right eye are written with the file name suffixes "_left" and "_right", respectively. This implies that stereo buffers are currently always written to separate files. A new registry setting makes it possible to configure mental ray with custom suffixes.
New shader interface functions mi_lookup_color_texture_x, mi_lookup_filter_color_texture_x, and mi_lookup_scalar_texture_x have been added.
The C++ shader interface mi::shader_v3::Mip_remap has been added.
Due to changes in the mi_eval macros in mental ray, existing shader packages built for earlier versions of mental ray are not binary compatible and need to be re-compiled. On the other hand, no source code changes are required if only public interface functions have been used.
Copyright © 1986, 2015 NVIDIA ARC GmbH. All rights reserved.