3D lip-syncing

Adventure Creator can manipulate the mouth movements of Mecanim-based characters to match what they're saying - a process known as lip-syncing.

Lip-syncing works by gathering phoneme (or lip) shapes for a speech line, and using those shapes to animate the character over time. We can choose how these phonemes are gathered with the Speech Manager's Lip syncing option.
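To picture the underlying idea, the sketch below shows a speech line reduced to a timed sequence of phoneme shapes, with the active shape looked up as playback advances. The types here are hypothetical, for illustration only - they are not Adventure Creator's internal code.

```csharp
// Illustrative sketch only - not Adventure Creator's internal code.
// A speech line is reduced to a timed sequence of phoneme shapes;
// the active shape is looked up as the line's playback time advances.
public struct PhonemeKey
{
    public float time;       // seconds from the start of the line
    public int shapeIndex;   // which phoneme group / animation to use
}

public static class LipSyncSketch
{
    // Returns the index of the phoneme shape active at the given time.
    public static int ActiveShape(PhonemeKey[] track, float playbackTime)
    {
        int current = 0;
        foreach (PhonemeKey key in track)
        {
            if (key.time <= playbackTime) current = key.shapeIndex;
            else break;
        }
        return current;
    }
}
```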

Which mode you set it to is up to you. From Speech Text requires no external files, and is a good choice if you don't use speech audio, but the results are less accurate.

Read Pamela file will tell Adventure Creator to search for files generated by Pamela, in much the same way as it searches for audio files. Pamela is a Windows-based application and can be downloaded for free here. It is a good choice if you want full control over the phonemes.

Read SAPI file will tell Adventure Creator to search for files generated by SAPI, another free Windows application, which is available here.

For a full guide to the various lip-syncing options, see the Manual's Lip-syncing chapter.

To make use of Pamela and SAPI files, you must also be using speech audio. This tutorial covers the process of preparing audio, and should be read first. Lip-syncing files are expected to have the same name as their associated audio clip, but with a .txt extension, and to be placed in a Resources/Lipsync folder. For example, if an audio file Player2.mp3 is placed in Resources/Speech, its lip-sync file Player2.txt must be placed in Resources/Lipsync.
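Because both assets share a base name under Resources, the pairing can be illustrated with Unity's standard Resources.Load API. This is just a sketch for checking your setup - AC performs this lookup for you at runtime.

```csharp
using UnityEngine;

// Sketch only - AC performs this lookup itself, but the naming
// convention can be verified manually with Unity's Resources API.
public class LipSyncFileCheck : MonoBehaviour
{
    void Start()
    {
        // Assets/.../Resources/Speech/Player2.mp3
        AudioClip clip = Resources.Load<AudioClip>("Speech/Player2");

        // Assets/.../Resources/Lipsync/Player2.txt
        TextAsset lipSync = Resources.Load<TextAsset>("Lipsync/Player2");

        if (clip != null && lipSync == null)
        {
            Debug.LogWarning("Found speech audio but no matching lip-sync file.");
        }
    }
}
```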

Regardless of which method you choose, you must then decide which phonemes are used by each frame of animation. In the Speech Manager, click Phonemes Editor.

Click Revert to defaults, and the editor will reconfigure itself to the recommended defaults for your chosen lip-syncing method - though you may have to tweak it further. Within each frame's group, individual phonemes are separated by forward slashes.
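For example, the first two frames might read as follows, using the two groups referred to later in this tutorial. The exact layout and groupings will depend on your chosen method and defaults:

```
Frame 1: B/M/P
Frame 2: AY/AH/IH/EY/ER
```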

Once you have set up your phonemes, you can animate your character model. In the Speech Manager, set Perform lipsync on to Portrait And Game Object.

You'll then need to add individual animations - one for each phoneme group - to your character's Animator. These will typically be added to a sub-layer, so as not to affect the main animation.

To control playback between them, define an Integer parameter named Phoneme in the Animator Controller, and use it - along with the character's "Talk bool" parameter - to transition between the different phoneme animations while the character talks.

In the character's Inspector, a new Phoneme integer field will have appeared. Enter the name of the parameter we just created.

Each animation must correspond to one of the phoneme groups listed in the editor. For example, according to the list of phonemes above, the first animation will correspond to the "B", "M" and "P" sounds, while the second will correspond to the "AY", "AH", "IH", "EY" and "ER" sounds.

At runtime, AC will then set this parameter to the index number of the active phoneme.
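A sketch of that runtime behaviour is below, using the Phoneme parameter defined above. AC makes these calls for you - this only illustrates the Animator-facing logic, and "Talk" stands in for whatever your character's Talk bool parameter is actually named.

```csharp
using UnityEngine;

// Sketch only - AC drives these parameters itself at runtime.
// "Phoneme" is the Integer parameter created above; "Talk" stands in
// for the character's Talk bool parameter.
public class PhonemeDriverSketch : MonoBehaviour
{
    public Animator animator;

    // Called while a speech line plays, with the index of the active
    // phoneme group (e.g. 0 = B/M/P, 1 = AY/AH/IH/EY/ER).
    public void SetPhoneme(int phonemeIndex)
    {
        animator.SetBool("Talk", true);
        animator.SetInteger("Phoneme", phonemeIndex);
    }

    // Called when the line ends.
    public void StopTalking()
    {
        animator.SetBool("Talk", false);
    }
}
```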