
Inference (What's New in 2025.1)



Use the Inference node to apply an ONNX machine learning model to your clip.

Highlights include:

  • Use a JSON sidecar file to define the inputs, outputs, and attributes of your model (see the sketch after this list).
  • Package the model, JSON, and thumbnail files into a single package using the inference_builder command-line tool (see the example invocation after this list).
  • Use the ML Engine Cache button to enable NVIDIA's TensorRT cache and improve the initialisation time (Linux only).
  • Quickly load a model using the Search widget.
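
The sidecar file is plain JSON. The following is a minimal sketch, assuming a simple name/inputs/outputs layout; the key names are illustrative assumptions, not the documented schema, so refer to the application's Inference documentation for the actual format.

    {
        "Name": "Example Model",
        "Description": "Sketch only; all key names here are assumptions.",
        "Inputs": [
            { "Name": "input", "Type": "Front" }
        ],
        "Outputs": [
            { "Name": "output", "Type": "Result" }
        ]
    }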

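Packaging is run from a shell. The invocation below is only a sketch: inference_builder is the tool named above, but the arguments shown are assumptions, so check the tool's built-in help for its actual options.

    # Hypothetical invocation; the argument names and order are assumptions.
    inference_builder my_model.onnx my_model.json my_model_thumbnail.png
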
Hide the Inference Node

The Inference node can be hidden from the application by setting the DL_DISABLE_INFERENCE_NODE environment variable. This is intended for larger studios that want to prevent the node from being used in their pipelines.
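
For example, on Linux the variable can be set in the shell used to launch the application. A minimal sketch, assuming the application only checks that the variable is defined:

    # Assumption: the application checks only that the variable is defined;
    # "1" is an illustrative value.
    export DL_DISABLE_INFERENCE_NODE=1
    # ...then launch the application from this same shell.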

When the node is hidden, it does not appear in the Node bins and cannot be added using the Search widget. If an existing setup containing an Inference node is loaded into an application running with the variable set, the node appears greyed out in the Schematic view and does not output media. Moreover, the setup cannot be rendered or sent to Burn or Background Reactor.

