...
Motive Volume-Based Lip Sync is a simple lip sync engine that drives lip sync expressions from the volume of the audio clip being played. This approach is limited and has been phased out in favour of uLipSync for human characters, but it can still be a good option for characters with simpler facial expressions.
Overview
Using Motive’s custom lip syncing tool, you can create expressions that are displayed on a character based on the dialogue audio the character is currently playing.
Creating a Lip Sync Map
You can create a new Lip Sync Map in the project folder using the Motive context menu:
...
For example, from volume 0 to 0.2, mouthOpen is mapped from 0 to 0.3. Once the volume passes 0.2, the next level takes over: from volume 0.2 to 0.6, mouthOpen is mapped from its previous value of 0.3 up to 0.7, while any shapes introduced at this level are mapped from 0 to their target value.
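The level behaviour described above amounts to a piecewise-linear interpolation between blend-shape weights. The sketch below illustrates the idea; the function name, the `(threshold, weights)` data layout, and the `mouthSmile` shape are assumptions for this example, not Motive's actual API:

```python
def evaluate_lip_sync(volume, levels):
    """Map an audio volume to blend-shape weights.

    `levels` is a list of (volume_threshold, {shape: weight}) pairs,
    sorted by threshold. Within a band, each shape's weight is
    interpolated from its value at the previous level (0 if the shape
    is new at this level) to its value at the current level.
    """
    prev_t, prev_weights = 0.0, {}
    for t, weights in levels:
        if volume <= t:
            # Fraction of the way through this volume band.
            frac = (volume - prev_t) / (t - prev_t) if t > prev_t else 1.0
            shapes = set(prev_weights) | set(weights)
            return {
                s: prev_weights.get(s, 0.0)
                   + frac * (weights.get(s, 0.0) - prev_weights.get(s, 0.0))
                for s in shapes
            }
        prev_t, prev_weights = t, weights
    # Volume above the last threshold: hold the final level's weights.
    return dict(prev_weights)


# Levels matching the example in the text; mouthSmile is a made-up
# "new shape" appearing at the second level.
example_levels = [
    (0.2, {"mouthOpen": 0.3}),
    (0.6, {"mouthOpen": 0.7, "mouthSmile": 0.5}),
]
```

At volume 0.1 (halfway through the first band) mouthOpen interpolates to roughly 0.15; at volume 0.4 (halfway through the second band) it sits roughly midway between 0.3 and 0.7, and mouthSmile midway between 0 and 0.5.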
Using a Lip Sync Map
This lip sync will be used when no other lip syncing service is enabled. To use a lip sync map, you must add a Motive Lip Sync Configurator to the character:
...
Here you can supply a lip sync map and adjust the volume sensitivity. If you reach the highest volume level too quickly, lower the sensitivity; if you never reach the higher volume levels, raise it.
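One way to picture the sensitivity setting is as a multiplier applied to the raw measured volume before the lip sync map is consulted. The function name and formula below are illustrative assumptions; the Configurator's internal math may differ:

```python
def apply_sensitivity(raw_volume, sensitivity=1.0):
    """Scale the measured audio volume by the sensitivity factor,
    clamped to the [0, 1] range a lip sync map expects.

    Sensitivity above 1 reaches the higher volume levels sooner;
    below 1, later. (Hypothetical formula, for illustration only.)
    """
    return max(0.0, min(1.0, raw_volume * sensitivity))
```

For instance, with a sensitivity of 2.0 a raw volume of 0.5 already saturates at 1.0, which is the "reaching the highest volume setting too fast" symptom that lowering the sensitivity corrects.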
Related Articles
Characters and Character Actions
...