The ML Deformer Export Training Data window is used to export the data needed to train the Machine Learning model. This training data takes the form of example poses of the original, complex deformer stack that the model is learning to approximate. Different frames in the Maya Creative scene should be set up with different poses so that the model can determine correlations between the inputs (the assigned controls) and the output (the deformed geometry). See ML Deformer Control Collector attributes for more information on how these example poses can be randomly generated. Alternatively, existing animations that cover a large range of motion may be sufficient for training on their own, or they can be combined with randomly generated poses to provide additional examples.
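As an illustration, here is a minimal Python sketch of populating a frame range with randomized poses. The control attribute names, the frame range, and the value range are placeholders for your rig, not part of the ML Deformer API; in practice, the controls should be the ones assigned to the deformer's Control Collector.

```python
import random
import maya.cmds as cmds

# Placeholder control attributes: substitute the controls assigned to the Control Collector.
controls = ["arm_ctrl.rotateX", "arm_ctrl.rotateZ", "elbow_ctrl.rotateY"]
start_frame, end_frame = 1, 500

for frame in range(start_frame, end_frame + 1):
    for attr in controls:
        # Key a random value within an assumed safe range of motion for each control.
        cmds.setKeyframe(attr, time=frame, value=random.uniform(-90.0, 90.0))
```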
The ML Deformer Export Training Data window lets you set up the sample poses to teach the ML Deformer. The training data poses are snapshots of the difference between the Target object and the ML Deformer model at different points in the deformation.
To learn how to use the ML Deformer to transfer complex deformations to a source object, see Create an ML Deformer and Create an ML Deformer using separate Target geometry.
To open the ML Deformer Export Training Data window
- In the ML Deformer Attributes tab, click the Export Training Data icon.
- In the ML Deformer Attributes tab, right-click the ML Model column, and choose Export Training Data….

General Export Settings
This area of the ML Deformer Export Training Data window lets you set the export location and what frame range of the scene should be used for the training set.
- Training Data Location
- Click Browse to navigate to a folder where you want to save the training data. This can be a temporary directory, as the training data is not needed once the model is trained, unless you want to retrain it with the same data (for example, to test different training settings).
- Export Start Frame/Export End Frame
- Specify the range of the animation that you want to use for creating the training data. The more frames used for training, the more accurate the resulting approximation of the deformation is likely to be. However, the larger the range of animation, the slower the training and export process becomes.
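If you want the export range to match the scene's current animation range, the following sketch queries it with standard maya.cmds calls; the values themselves are still entered in the Export Start Frame and Export End Frame fields of this window.

```python
import maya.cmds as cmds

# Query the scene's animation range as a starting point for the export range.
export_start = cmds.playbackOptions(query=True, animationStartTime=True)
export_end = cmds.playbackOptions(query=True, animationEndTime=True)
print("Suggested Export Start Frame / Export End Frame: {:.0f} - {:.0f}".format(export_start, export_end))
```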
Geometry Export Settings
This area of the ML Deformer Export Training Data window lets you specify how the exported geometry should be represented for processing by the Machine Learning model.
- Training Data Name
- Enter a name for the folder that will contain the exported training data. Click Change to reuse a folder from a previously exported training data set. The folder is given a .mltd suffix for easier identification of its purpose; for example, using "test" creates a folder called test.mltd in the specified directory (see the path sketch after the Tip below).
- Tip: Hover over Training Data Name or the Change button to see information about the Training Data, such as count, frame range, and so on.
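A small sketch of how the exported folder path resolves on disk, using placeholder values for Training Data Location and Training Data Name:

```python
import os

# Placeholder values for Training Data Location and Training Data Name.
training_data_location = "D:/ml_training"
training_data_name = "test"

# The exported folder gets the .mltd suffix, for example D:/ml_training/test.mltd.
export_folder = os.path.join(training_data_location, training_data_name + ".mltd")
print(export_folder)
```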
- Delta Mode
- Set a mode for how the difference between the base and target geometry is represented in the exported data: specifically, the "delta" between the original Source geometry and the complex Target geometry being approximated. This difference is what the ML Deformer is ultimately trying to learn and predict (the sketch after the Surface entry below illustrates the raw per-vertex difference). Select one of the following modes to choose how the Delta is calculated and represented.
- Note:
- Depending on the rig, setting the Delta Mode to Surface may produce artifacts and incorrect jagged deformations in some cases. This happens when the surface vertex frames aren't calculated consistently, often due to overlapping vertices in certain poses. It's possible the results can be improved if the bad poses are removed from the training set. However, the ML Deformer will still perform poorly on those and similar poses after training.
- When trained across a large number of controls, the ML model tends to learn incorrect associations between controls and deformations in unrelated parts of the mesh. Training on poses that trigger fewer controls at once can help with this issue.
- Offset
- Use Offset as a last resort for troubleshooting when you are not satisfied with the Surface results, as Offset has difficulty with long joint chains.
- Based on local rotation, Offset uses the object-space difference between each vertex's position before and after deformation for the approximation. Offset mode avoids potential problems with more complex Delta calculations, but it is "harder", that is, more work, for the ML model to learn.
- Surface
- Surface mode represents the Delta in terms of displacement relative to the surface at each vertex. It ignores the parent transformation(s) and is recommended for approximating deep joint hierarchies. Surface mode is the default setting and generally produces good results; however, in some cases, such as when vertices overlap, it may produce unwanted artifacts, which you can minimize with the Smoothing Iterations value.
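To make the Delta concrete, here is a minimal maya.cmds sketch of the raw per-vertex, object-space difference between Source and Target geometry. This is what Offset mode uses directly, while Surface mode re-expresses it in per-vertex surface frames. The mesh names are placeholders; the ML Deformer computes these deltas internally during export.

```python
import maya.cmds as cmds

# Placeholder mesh names: the Source is the geometry driven by the ML Deformer,
# the Target is the geometry with the full, complex deformer stack.
source_mesh = "body_source"
target_mesh = "body_target"

vertex_count = cmds.polyEvaluate(source_mesh, vertex=True)
deltas = []
for i in range(vertex_count):
    src = cmds.xform("{}.vtx[{}]".format(source_mesh, i), query=True, objectSpace=True, translation=True)
    tgt = cmds.xform("{}.vtx[{}]".format(target_mesh, i), query=True, objectSpace=True, translation=True)
    # Offset mode stores this object-space difference directly.
    deltas.append([t - s for t, s in zip(tgt, src)])
```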
- Export Surface Information
- Use the Export Surface Information setting to include additional vertex frames with the exported data. Use it in conjunction with the Learn Surface setting to improve noisy training results.
- Smoothing Iterations
- Use the Smoothing Iterations setting to reduce artifacts when using the Surface Delta Mode. If the Smoothing Iterations value is greater than 0, a Delta Mush algorithm smooths the geometry during export.
- Reset to Defaults
- Restores the original settings of the Export Training Data window.
- Export
- Creates ML Deformer training data using the settings specified in this window. When complete, the exported data will be given the name specified under "Training Data Name", allowing you to distinguish it from others and to switch between training sets when training the Machine Learning model later.
- Save
- Click Save to save the current settings in the Export Training Data window.