Imager Inference

This imager performs inference with image-to-image machine learning models using the ONNX framework.

Note:

These image models may lack coherence between frames: they can fail to maintain visual consistency, temporal stability, and semantic continuity, leading to noticeable discrepancies or disruptions between consecutive frames.
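
For scripted or batch setups, the imager can also be created through Arnold's Python API. The sketch below is a minimal illustration only: it assumes an Arnold 7 Python environment, that the node entry is named imager_inference, and that the model path parameter is called model_filename; the driver name is a placeholder. Verify the actual names with kick -info.

    # Minimal sketch, assuming the node entry is "imager_inference" and the
    # parameter names on this page; verify with `kick -info imager_inference`.
    from arnold import *

    AiBegin()

    # ... scene setup (camera, geometry, outputs) would go here ...

    # Create the inference imager and point it at an ONNX model.
    imager = AiNode(None, "imager_inference", "inference1")
    AiNodeSetStr(imager, "model_filename", "/path/to/model.onnx")

    # Imagers chain through their "input" parameter; the last imager in the
    # chain plugs into the driver's "input" parameter.
    driver = AiNodeLookUpByName(None, "my_driver")  # hypothetical driver name
    if driver:
        AiNodeSetPtr(driver, "input", imager)

    AiEnd()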

Input

Points to the previous imager in the chain, so that imagers can be stacked and multiple operations applied serially.

Enable

Enables this imager.

Layer Selection

A glob pattern or regular expression that selects the AOVs processed by the imager. A selection can combine multiple sub-selections joined by:

  • or (union)
  • and (intersection)
  • not (negation)
  • and not (exclusion)
  • () for nested selections

By default, glob matching is used unless the selection is wrapped in regex quotes (r'<my_regex>'), as illustrated by the sketch after the following examples:

  • specular or diffuse
  • not r'sss_(direct|indirect)'
  • r'color_(mask1|mask2)' or r'mask[34]'
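
To make the matching rules concrete, here is an illustrative Python sketch of the documented semantics (glob by default, regex inside r'...' quotes). It is a reading aid only, not Arnold's actual selection parser:

    import fnmatch
    import re

    def matches(pattern, aov):
        # Regex quotes (r'...') switch to regular-expression matching;
        # anything else is treated as a glob.
        if pattern.startswith("r'") and pattern.endswith("'"):
            return re.fullmatch(pattern[2:-1], aov) is not None
        return fnmatch.fnmatch(aov, pattern)

    aovs = ["specular", "diffuse", "sss_direct", "color_mask1", "mask3"]

    # "specular or diffuse" -> union of two selections
    print([a for a in aovs if matches("specular", a) or matches("diffuse", a)])

    # "not r'sss_(direct|indirect)'" -> negation of a regex selection
    print([a for a in aovs if not matches("r'sss_(direct|indirect)'", a)])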

Inference Device

Specifies the device type used by the Inference imager. By default, inference uses the CPU.

To enable GPU inferencing, you must download and install additional libraries. The Arnold Inference imager leverages the ONNX Runtime, which requires the ONNX Runtime CUDA provider and its dependencies for GPU acceleration. The libraries required on Windows and Linux are listed below.

If GPU is selected but the required libraries are missing, Arnold automatically reverts to CPU inferencing.
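
One way to sanity-check that the CUDA provider and its dependencies resolve on a machine is the onnxruntime Python package (installed separately, e.g. the onnxruntime-gpu wheel; Arnold ships its own copy of the runtime, so this only mirrors the check):

    import onnxruntime as ort

    providers = ort.get_available_providers()
    print(providers)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

    if "CUDAExecutionProvider" not in providers:
        print("CUDA provider not found; inference would fall back to the CPU.")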

Library: ONNX Runtime 1.19.2
  • Windows: onnxruntime_providers_cuda.dll
  • Linux: libonnxruntime_providers_cuda.so
  • Notes: The CUDA provider library needs to be copied into the Arnold bin folder (the same folder as ai.dll or libai.so). It can either be taken from the pre-compiled onnxruntime-<platform>-x64-gpu-1.19.2 package or compiled from the ONNX Runtime source code.

Library: cuDNN 9.10.1
  • Windows: cudnn64_9.dll, cudnn_cnn64_9.dll, cudnn_engines_precompiled64_9.dll, cudnn_engines_runtime_compiled64_9.dll, cudnn_graph64_9.dll, cudnn_heuristic64_9.dll, cudnn_ops64_9.dll
  • Linux: libcudnn.so.9.10.1, libcudnn_cnn.so.9.10.1, libcudnn_engines_precompiled.so.9.10.1, libcudnn_engines_runtime_compiled.so.9.10.1, libcudnn_graph.so.9.10.1, libcudnn_heuristic.so.9.10.1, libcudnn_ops.so.9.10.1
  • Notes: These libraries either need to be added to your system PATH (Windows) or LD_LIBRARY_PATH (Linux), or copied into the Arnold bin folder (the same folder as ai.dll or libai.so).

Library: CUDA 12.2
  • Windows: cudart64_12.dll, cublas64_12.dll, cublasLt64_12.dll, cufft64_11.dll
  • Linux: libcudart.so.12.1.55, libcublas.so.12.1.0.26, libcublasLt.so.12.1.0.26, libcufft.so.11.0.2.4
  • Notes: These libraries need to be added to your system PATH (Windows) or LD_LIBRARY_PATH (Linux), which usually happens during the CUDA Toolkit installation. Alternatively, they can be copied into the Arnold bin folder (the same folder as ai.dll or libai.so).

Model Filename

Path to the ONNX model file.
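
Before pointing the imager at a model, it can help to confirm what inputs and outputs the model expects. A quick inspection sketch using the onnxruntime Python package (the model path is a placeholder):

    import onnxruntime as ort

    session = ort.InferenceSession("/path/to/model.onnx",
                                   providers=["CPUExecutionProvider"])
    for i in session.get_inputs():
        print("input:", i.name, i.shape, i.type)
    for o in session.get_outputs():
        print("output:", o.name, o.shape, o.type)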

Transform Preset

Common transformation presets for configuring input and output pixel values.

  • SRGB_255:
    • Color Space: sRGB
    • Input Multiply: Multiplies input pixel values by 255.
    • Output Divide: Divides output pixel values by 255.
Note:

If the inference imager renders black, the transform preset likely needs to be changed.

Input Multiply

All pixels are multiplied by this factor before performing the inference.

Output Divide

All pixels are divided by this factor after performing the inference.

Color Space

Defines the color space that pixel values are transformed to before inference. The inverse transform is applied after inference.
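
Taken together, the documented order of operations around inference is: transform to the chosen color space, apply Input Multiply, run the model, apply Output Divide, then apply the inverse color-space transform. A NumPy sketch of that pipeline for the SRGB_255 preset, where run_model stands in for the ONNX inference call:

    import numpy as np

    def linear_to_srgb(x):
        # Standard piecewise sRGB encoding.
        return np.where(x <= 0.0031308, 12.92 * x,
                        1.055 * np.power(np.maximum(x, 0.0), 1 / 2.4) - 0.055)

    def srgb_to_linear(x):
        # Inverse of the encoding above.
        return np.where(x <= 0.04045, x / 12.92,
                        np.power((x + 0.055) / 1.055, 2.4))

    def apply_imager(pixels, run_model, input_multiply=255.0, output_divide=255.0):
        x = linear_to_srgb(pixels) * input_multiply  # color space, then Input Multiply
        y = run_model(x)                             # ONNX inference (stand-in)
        return srgb_to_linear(y / output_divide)     # Output Divide, then inverse transform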

Blend

Linear interpolation between the input and the inference output.

Blend Mode

Specifies the blending operation used to combine the input and the inference output.
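
As a reference for the default behavior, a blend value of b linearly interpolates between the two images; a minimal NumPy sketch (the simple linear mix shown here stands in for whichever blend mode is selected):

    import numpy as np

    def blend_result(input_pixels, inference_pixels, blend):
        # blend = 0 keeps the original input; blend = 1 keeps the full
        # inference output; values in between mix the two linearly.
        return (1.0 - blend) * input_pixels + blend * inference_pixels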
