Pixel Coverage

mental ray can compute the coverage of a pixel from the samples taken in that pixel. This feature is enabled by declaring a frame buffer with the data type "coverage". All geometric scene elements marked with a non-zero label are considered for the coverage calculation.
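For illustration, a scene fragment along the following lines could request the coverage frame buffer through a camera output statement and mark an object with a non-zero label. The names, file formats, and resolution below are placeholders, and the geometry itself is omitted.

    camera "cam"
        # main image plus the per-pixel coverage frame buffer
        output "+rgba"    "tif" "image.tif"
        output "coverage" "st"  "cover.st"
        resolution 640 480
    end camera

    # only geometry carrying a non-zero label is considered for coverage
    object "sphere1"
        visible
        tag 1                  # label used to identify this object's samples
        group
            # vectors and polygons or surfaces of the object go here
        end group
    end object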

If oversampling is used, mental ray may take several samples (primary rays) in a pixel for anti-aliasing. The samples are averaged to produce the final pixel value. Where one object overlaps another, one object may account for more of the pixel than the other, which might only touch a corner of the pixel. In such a case, the coverage frame buffer records how much of the pixel the more prominent object covers, as a number in the range 0.0 (the pixel is empty) to 1.0 (an object covers the pixel completely). Whether a sample belongs to a certain object is decided by the object's label value, which can be defined with tag statements in the scene file. The precision of the coverage values is bounded by the number of samples taken, which depends on the sampling settings.
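For example, with the classic min/max oversampling settings sketched below, mental ray would take between 4^0 = 1 and 4^2 = 16 samples per pixel, so a coverage value could only be resolved in steps of roughly 1/16 wherever the maximum is reached; the actual count is chosen adaptively, and the numbers here are illustrative.

    options "opt"
        # adaptive oversampling: at least 1 and at most 16 samples per pixel;
        # with 16 samples, coverage is quantized to multiples of about 1/16
        samples  0 2
        contrast 0.05 0.05 0.05
    end options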

The coverage buffer can be written to a file in a scalar format such as st. mental ray can also write it in a color format such as TIFF by automatically converting the scalar values to RGB/A when the file is stored.
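For example, the same buffer could instead be directed to a color file format; the scalar coverage values would then be converted on write. The file name is illustrative.

    # scalar coverage converted to RGB/A automatically when the TIFF is written
    output "coverage" "tif" "cover.tif"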

If coverage calculation is enabled by adding the corresponding frame buffer type, then the depth, normal, motion, and tag frame buffers, if enabled and left at their default filtering setting (which is off), are guaranteed to hold values from the object with the largest coverage in the pixel.
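The following sketch shows how these buffers might be requested alongside coverage in the camera, leaving them unfiltered; the "-" prefix explicitly requests uninterpolated output, which is the default for these data types, and the file names and native file formats are illustrative.

    # auxiliary per-pixel buffers, unfiltered, so each pixel holds the values
    # of the object with the largest coverage
    output "-z"   "zt" "depth.zt"
    output "-n"   "nt" "normals.nt"
    output "-m"   "mt" "motion.mt"
    output "tag"  "tt" "labels.tt"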
