Overview
Using Motive’s custom lip syncing tool, you can create expressions that are displayed on a character based on the dialogue audio that character is currently playing.
Creating a Lip Sync Map
You can create a new Lip Sync Map in the project folder using the Motive context menu:
...
The lip sync map editor lets you map different audio volume levels to different expressions, where each expression lists blend shapes and the value each shape should reach at that volume level.
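Conceptually, a lip sync map is a list of volume levels, each containing a set of blend shape targets. The following is a hypothetical sketch of that data layout as a Unity ScriptableObject; the class and field names are illustrative assumptions, not Motive's actual types.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical illustration of a lip sync map's data layout.
// Class and field names are assumptions, not Motive's actual API.
[CreateAssetMenu(menuName = "Example/Lip Sync Map")]
public class ExampleLipSyncMap : ScriptableObject
{
    [System.Serializable]
    public class ShapeTarget
    {
        public string blendShapeName;                 // e.g. "mouthOpen"
        [Range(0f, 1f)] public float targetValue;     // value reached at this volume level
    }

    [System.Serializable]
    public class VolumeLevel
    {
        [Range(0f, 1f)] public float volume;          // audio volume that triggers this level
        public List<ShapeTarget> shapes = new List<ShapeTarget>();
    }

    // Levels are expected to be ordered from quietest to loudest.
    public List<VolumeLevel> volumeLevels = new List<VolumeLevel>();
}
```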
When you create a lip sync map, you will get an asset that looks like this:
...
By clicking on “Add Volume Level” you can add your first volume level.
...
The Volume Level set here is the volume at which each shape reaches its selected value. If the current audio volume falls between two volume levels, the expression is automatically interpolated based on where it falls between them.
Add your first shape to get started.
...
In this case, we have one Volume Level set to 0.6, and when the audio reaches that level, mouthOpen will reach 0.7. While the audio volume is between 0 and 0.6, mouthOpen is interpolated proportionally between 0 and 0.7.
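For example, with these settings a current volume of 0.3 (halfway to the 0.6 level) would put mouthOpen at roughly 0.35 (halfway to 0.7).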
You are also able to control different shapes at different volume levels.
...
For example, from volume 0 to 0.2, mouthOpen is mapped from 0 to 0.3. Once the volume passes 0.2, the next level takes over: from volume 0.2 to 0.6, mouthOpen is mapped from 0.3 up to 0.7, and any shapes introduced at the higher level are mapped from 0 to their target values.
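Under the hood, this behaves like a piecewise linear interpolation between consecutive volume levels. The sketch below illustrates that mapping under the assumption of simple linear blending; the class and method names are hypothetical, and this is not Motive's actual implementation.

```csharp
using UnityEngine;

// Hypothetical sketch of the piecewise mapping described above; not Motive's actual code.
public static class LipSyncMappingExample
{
    // Levels are (volume, shapeValue) pairs, sorted by volume ascending.
    public static float MapVolumeToShape((float volume, float shapeValue)[] levels, float currentVolume)
    {
        // Below the first level: interpolate from 0 toward the first level's value.
        if (currentVolume <= levels[0].volume)
        {
            float t = levels[0].volume > 0f ? currentVolume / levels[0].volume : 1f;
            return Mathf.Lerp(0f, levels[0].shapeValue, t);
        }

        // Between two levels: interpolate between their values.
        for (int i = 1; i < levels.Length; i++)
        {
            if (currentVolume <= levels[i].volume)
            {
                float t = Mathf.InverseLerp(levels[i - 1].volume, levels[i].volume, currentVolume);
                return Mathf.Lerp(levels[i - 1].shapeValue, levels[i].shapeValue, t);
            }
        }

        // Above the loudest level: clamp to its value.
        return levels[levels.Length - 1].shapeValue;
    }
}
```

With levels (0.2, 0.3) and (0.6, 0.7) as in the example above, a current volume of 0.4 would return 0.5 for mouthOpen.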
Using a Lip Sync Map
This lip sync map will be used when no other lip syncing service is enabled. To use it, add a Motive Lip Sync Configurator component to the character:
...
Here you can supply a lip sync map and adjust the volume sensitivity. If the map reaches the highest volume level too quickly, consider lowering the sensitivity; if it never reaches the higher volume levels, consider raising it.
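The exact semantics of the sensitivity value are not spelled out here; one plausible reading is a multiplier applied to the measured volume before the map lookup. The snippet below illustrates only that assumption and is not Motive's documented behaviour.

```csharp
using UnityEngine;

// Assumption: sensitivity acts as a multiplier on the measured volume
// before the lip sync map lookup. Not confirmed Motive behaviour.
public static class SensitivityExample
{
    public static float ApplySensitivity(float measuredVolume, float sensitivity)
    {
        return Mathf.Clamp01(measuredVolume * sensitivity);
    }
}
```

Under that assumption, a sensitivity of 2 would make a measured volume of 0.3 behave like 0.6, reaching higher volume levels sooner.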
Related Articles
Characters and Character Actions
Characters, Animation Controllers and Storyflow
...
Overview
Motive SDK currently supports three different lip sync engines. All three engines have been configured to work with Motive’s Expression Map system.
uLipSync
The default Lip Sync engine is based on a modified version of uLipSync. uLipSync uses “Mel-Frequency Cepstrum Coefficients (MFCC), which represent the characteristics of the human vocal tract.” uLipSync is supported on all platforms (including WebGL).
Volume-Based Lip Sync
Volume-based lip sync (MotiveUniversalLipSync) uses the volume of the audio clip to determine lip and mouth shapes. It is less sophisticated than uLipSync, but it may be a better option for non-human characters (cartoon characters, etc.). It is available on all platforms.
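Playback volume can be estimated directly from the audio that is currently playing. The snippet below is a generic illustration of computing an RMS level from a Unity AudioSource; it shows the general idea behind volume-based lip sync and is not Motive's implementation.

```csharp
using UnityEngine;

// Generic illustration of estimating the current playback volume (RMS) of an AudioSource.
// Not Motive's implementation; it only shows the idea behind volume-based lip sync.
public class RmsVolumeExample : MonoBehaviour
{
    public AudioSource source;

    const int SampleCount = 256;
    readonly float[] samples = new float[SampleCount];

    public float CurrentVolume()
    {
        // Copy the most recent output samples of channel 0.
        source.GetOutputData(samples, 0);

        float sumOfSquares = 0f;
        for (int i = 0; i < SampleCount; i++)
            sumOfSquares += samples[i] * samples[i];

        // Root-mean-square amplitude, roughly proportional to perceived loudness.
        return Mathf.Sqrt(sumOfSquares / SampleCount);
    }
}
```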
Oculus Lip Sync
Motive can also work with Oculus Lip Sync by enabling the build flag MOTIVE_OCULUS_LIP_SYNC. Note that Oculus Lip Sync can only be run on Meta or Oculus devices.
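In Unity, a build flag like this is typically added as a scripting define symbol (Project Settings > Player > Scripting Define Symbols) and used to guard platform-specific code with conditional compilation. The snippet below shows that general pattern; the guarded class is hypothetical.

```csharp
// Typical Unity pattern for a build flag such as MOTIVE_OCULUS_LIP_SYNC:
// code inside the #if block only compiles when the symbol is defined.
// The class below is hypothetical, for illustration only.
#if MOTIVE_OCULUS_LIP_SYNC
using UnityEngine;

public class OculusLipSyncDriverExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Oculus lip sync path compiled in (Meta/Oculus devices only).");
    }
}
#endif
```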