
Inference (What's New in 2025.2)

Use the Inference node to apply an ONNX machine learning model to your clip.


Channels Support

The channels required by a model's inputs and produced by its outputs can be defined on a per-input and per-output basis in the model's JSON sidecar file.

Inputs Channels token

Use the Channels token to specify the channels to be used when performing the inference. The string can be any combination of R, G, and B for inputs expecting colours, or A for inputs expecting a single matte channel. The order of the channels affects the result. For example, BGR won't yield the same result as RGB.

Since Batch nodes do not support RGBA clips as a single input, you must define two input sockets of the same name in the model's JSON sidecar file. In the sidecar file, assign the RGB channels to the first input; if the model requires an RGBA input, add the A channel only to the second input. See Inference Builder for an example.
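
For illustration, here is a minimal sketch of that two-socket pattern in a sidecar file. Only the Channels token and the same-name pairing are confirmed above; the Inputs key, the Name token, and the nesting are assumptions about the sidecar layout, so refer to Inference Builder for the authoritative example:

    {
        "Inputs": [
            { "Name": "front", "Channels": "RGB" },
            { "Name": "front", "Channels": "A" }
        ]
    }

The first socket carries the colour channels; the second, sharing the same name, contributes only the alpha channel.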

Channels    Rule
RGB         Can be any combination of R, G, and B for inputs expecting colours.
Alpha       Must be A for inputs expecting an alpha channel.

Outputs Channels token

Use the Channels token to specify the channels output by the model when the output is not an RGB image. For example, a model exporting an STMap (Red and Green only) needs to have its channels set to "RG" in the model's JSON sidecar file.
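
As a hedged sketch, the Outputs entry for that STMap example could look as follows; apart from the Channels value of "RG", the key names and nesting are assumptions about the sidecar layout:

    {
        "Outputs": [
            { "Name": "stmap", "Channels": "RG" }
        ]
    }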

Channels Amount    Rule
1 channel          Set the Channels token to "A".
2 channels         Set the Channels token to either "RG" or "GR". The latter can be used to swap the two channels.
3 channels         Set the Channels token to any 3-letter combination (like "RGB", "BGR", or "GRB"). The order of the channels affects the result.


CPU Support (Rocky Linux)

On Rocky Linux, an inference can be run using either the CPU or the GPU. On macOS, CPU remains the only option.

Running the inference is faster on the GPU, but larger frames that cannot be rendered on the GPU can now be rendered using the CPU.

Use the CPU / GPU drop-down button to select the mode of your choice. The ML Engine Cache button is only available when GPU is selected.



Bit Depth Support (Rocky Linux)

On Rocky Linux, when using the GPU rendering platform, you can choose to run a model in 16-bit fp or 32-bit fp. In previous versions, models were only run at 32-bit fp.

Selecting 16-bit fp offers the following advantages:

  • It allows running a model on higher-resolution images, since 16-bit fp requires less memory.
  • It provides better performance.

Modification to the inference_builder command line

The default value of the JSON file's KeepAspectRatio token has been changed to False, since that setting yields better results for most models.
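
As a sketch, a sidecar file generated with the new default would carry the token as a JSON boolean, along the lines of the following (surrounding structure omitted):

    {
        "KeepAspectRatio": false
    }

Should a particular model perform better with the previous behaviour, setting the token back to true in the sidecar file presumably restores it.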

