Audio-driven facial animation workflow
Using a Character Face and the Voice device, you can set up a character to “talk” with an audio file or live audio input as its voice. Through the Voice device, the phonemes in the audio input drive the expressions on the character’s face.
To drive facial expressions using audio data:
1. Load a head model with shapes or cluster shapes.
   The shapes on your head model should cover the phonemes your character needs to convey the audio input, and they should correspond to the sound parameters of the Voice device. See also Phoneme shapes.
2. Add a Character Face to the scene.
3. In the Character Face Definition pane, add a custom expression for each phoneme, then map the phoneme shapes you created for your head model to these custom expressions.
4. Link the Character Face to a Voice device.
5. Add sound parameters, or phoneme sounds, in the Voice device settings.
   For each sound parameter (phoneme sound) you add to the Voice device, a corresponding custom expression automatically appears in the Expressions pane.
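Conceptually, the workflow above builds a mapping from phoneme sounds to custom expressions, which the Voice device then drives from the audio input. The following plain-Python sketch illustrates that idea only; it does not use the MotionBuilder SDK, and all names in it (`PHONEME_EXPRESSIONS`, `apply_phonemes`, the phoneme and expression labels) are hypothetical.

```python
# Conceptual sketch (plain Python, NOT the MotionBuilder SDK): how a Voice-style
# device might turn detected phoneme intensities into custom-expression weights.
# All names here are hypothetical placeholders for this illustration.

PHONEME_EXPRESSIONS = {
    "AA": "MouthOpen",    # each phoneme sound maps to one custom expression,
    "OO": "LipsRound",    # which in turn drives a shape on the head model
    "EE": "LipsWide",
    "MM": "LipsClosed",
}

def apply_phonemes(intensities):
    """Convert per-phoneme intensities (0.0-1.0) from the audio analysis
    into expression weights, ignoring phonemes with no mapped expression."""
    weights = {}
    for phoneme, level in intensities.items():
        expression = PHONEME_EXPRESSIONS.get(phoneme)
        if expression is not None:
            # clamp to the valid weight range before driving the shape
            weights[expression] = max(0.0, min(1.0, level))
    return weights

# Example: the audio analysis reports a strong "AA" and a faint "OO";
# "ZZ" has no mapped expression, so it is dropped.
print(apply_phonemes({"AA": 0.9, "OO": 0.2, "ZZ": 0.5}))
```

Adding a new sound parameter in this sketch corresponds to adding one more entry to the mapping, mirroring how a new custom expression appears in the Expressions pane when you add a sound parameter to the Voice device.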
Note: To fine-tune facial animation driven by the Voice device, adjust the values of the automatically added operators, add more operators, or add other devices to trigger facial expressions.