Machine Learning (ML) Deformer

The ML Deformer lets you efficiently approximate complex deformation to speed up animating, blocking, or crowd scenes.

The Maya ML Deformer uses machine learning to approximate complex deformations with quick, interactive results. Although it produces only an approximation, it speeds up character rig posing and playback, which helps layout artists and animators who need fast feedback. It also makes characters more portable: they deform accurately without complex, custom setups.

The ML Deformer works by learning from an animation sequence that puts the target geometry through a wide range of motion. This sequence can consist of motion capture data, existing keyframed animation, poses randomized with the ML Deformer's Pose Generator, or a combination of all three. This sample data, along with the values of the Driver Controls that influence the pose on each frame, is used to train the ML Deformer.
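Conceptually, the training step pairs the Driver Control values on each frame with the deformation deltas they produce, then fits a model to predict deltas from controls. The toy sketch below illustrates that idea with a single control and a single vertex delta, using a simple linear fit in place of Maya's actual neural network; all names and values here are invented for illustration and are not part of the Maya API.

```python
# Conceptual illustration only -- NOT Maya's actual implementation.
# One Driver Control value per frame is paired with the observed
# deformation delta; a toy linear model stands in for the trained network.

def fit_linear(controls, deltas):
    """Least-squares fit of delta ~ w * control + b for one control."""
    n = len(controls)
    mean_c = sum(controls) / n
    mean_d = sum(deltas) / n
    var = sum((c - mean_c) ** 2 for c in controls)
    cov = sum((c - mean_c) * (d - mean_d)
              for c, d in zip(controls, deltas))
    w = cov / var
    b = mean_d - w * mean_c
    return w, b

# Hypothetical training data: per-frame control value -> vertex delta
frames_controls = [0.0, 0.5, 1.0, 1.5, 2.0]   # e.g. an elbow-bend control
frames_deltas   = [0.0, 0.9, 2.1, 2.9, 4.1]   # e.g. a bulge offset in cm

w, b = fit_linear(frames_controls, frames_deltas)

def predict_delta(control_value):
    """Approximate the deformation delta for a new, unseen pose."""
    return w * control_value + b
```

After the fit, `predict_delta` plays the role of the trained deformer: given a new control value, it returns an approximate delta instead of re-evaluating the full deformation stack. Maya's real model is nonlinear and handles many controls and vertices at once, but the train-then-predict structure is the same.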

Once the model is trained, you can toggle between the ML Deformer and the original complex source deformer: use the approximation while animating and switch back to the complex deformer for rendering. You can evaluate the trained model's accuracy with the Training Results window and the Target Morph, and you can train multiple models and switch between them to compare their quality.

Use the ML Deformer for characters where precision isn't critical, such as background characters and crowds.

Tip: You can test and experiment with ML Deformer sample files in the Content Browser (Windows > Content Browser > Examples > Animation > ML Deformer).

Sample ML Deformer animation in the Content Browser

Known Limitations

The following limitations apply to the ML Deformer:

  • Depending on the rig, setting the Delta Mode to Surface may produce artifacts and incorrect jagged deformations in some cases. This happens when the surface vertex frames aren't calculated consistently, often due to overlapping vertices in certain poses. Removing the problem poses from the training set may improve results; however, the ML Deformer will still perform poorly on those and similar poses after training.
  • When trained across a large number of controls, the ML model tends to learn incorrect associations between controls and deformations in unrelated parts of the mesh. Training on poses that activate fewer controls at once can help with this issue.
