Lip Syncing

Overview

The Motive SDK currently supports three lip sync engines. All three are configured to work with Motive’s Expression Map system.

uLipSync

The default lip sync engine is based on a modified version of uLipSync. uLipSync analyzes “Mel-Frequency Cepstrum Coefficients (MFCC), which represent the characteristics of the human vocal tract,” to match incoming audio against phoneme profiles. uLipSync is supported on all platforms, including WebGL.
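
As a quick orientation, the sketch below shows the upstream uLipSync callback API: a `MonoBehaviour` method wired to the uLipSync component's lip sync update event in the Inspector receives the per-frame phoneme and volume results. Since Motive ships a modified version of uLipSync, the exact hookup in Motive may differ; the class name `LipSyncDebugLogger` here is just an illustrative example.

```csharp
using UnityEngine;
using uLipSync;

// Minimal listener for uLipSync's per-frame analysis results.
// Wire this method to the uLipSync component's lip sync update
// event in the Inspector.
public class LipSyncDebugLogger : MonoBehaviour
{
    public void OnLipSyncUpdate(LipSyncInfo info)
    {
        // info.phoneme is the name of the best-matching MFCC profile
        // (e.g. "A", "I", "U"); info.volume is the normalized volume.
        if (info.volume < Mathf.Epsilon) return;
        Debug.Log($"phoneme: {info.phoneme}, volume: {info.volume:F3}");
    }
}
```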

Volume-Based Lip Sync

Volume-based lip sync (MotiveUniversalLipSync) uses the volume of the audio clip to determine lip and mouth shapes. It is less sophisticated than uLipSync, but it may be a better option for non-human characters (cartoon characters, etc.). It is available on all platforms.
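
To illustrate the general technique (not MotiveUniversalLipSync's actual implementation, which may differ), a volume-based approach can be sketched as: sample the playing audio, compute its RMS loudness, and drive a single "mouth open" blend shape. The class name and all tunable fields below are hypothetical.

```csharp
using UnityEngine;

// Hypothetical sketch of volume-based lip sync: RMS loudness of the
// playing audio drives one mouth-open blend shape on the face mesh.
[RequireComponent(typeof(AudioSource))]
public class VolumeLipSyncSketch : MonoBehaviour
{
    public SkinnedMeshRenderer face;     // mesh with a mouth-open blend shape
    public int mouthOpenBlendShape = 0;  // blend shape index (model-specific)
    public float gain = 4f;              // scales RMS into the 0..1 range
    public float smoothing = 12f;        // higher = snappier mouth movement

    AudioSource _source;
    readonly float[] _samples = new float[256];
    float _openness;

    void Awake() => _source = GetComponent<AudioSource>();

    void Update()
    {
        // Read the most recent output samples and compute RMS volume.
        _source.GetOutputData(_samples, 0);
        float sum = 0f;
        foreach (float s in _samples) sum += s * s;
        float rms = Mathf.Sqrt(sum / _samples.Length);

        // Smooth toward the target openness and apply it (0..100 weight).
        float target = Mathf.Clamp01(rms * gain);
        _openness = Mathf.Lerp(_openness, target, smoothing * Time.deltaTime);
        face.SetBlendShapeWeight(mouthOpenBlendShape, _openness * 100f);
    }
}
```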

Oculus Lip Sync

Motive can also work with Oculus Lip Sync by enabling the MOTIVE_OCULUS_LIP_SYNC build flag. Note that Oculus Lip Sync only runs on Meta (Oculus) devices.
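
In a Unity project, a build flag like this is added as a scripting define symbol (Project Settings > Player > Scripting Define Symbols, or via your build scripts). The sketch below, using a hypothetical LipSyncEngineSelector helper, shows how project code can branch on the flag so Oculus-specific code is compiled only for Meta/Oculus builds:

```csharp
using UnityEngine;

// Hypothetical helper showing how code can branch on the
// MOTIVE_OCULUS_LIP_SYNC scripting define symbol.
public class LipSyncEngineSelector : MonoBehaviour
{
    void Awake()
    {
#if MOTIVE_OCULUS_LIP_SYNC
        // Compiled only when the flag is set (Meta/Oculus builds).
        Debug.Log("Oculus Lip Sync build: Oculus-specific components active.");
#else
        // Flag not set: the default uLipSync engine applies.
        Debug.Log("Oculus Lip Sync disabled: using the default uLipSync engine.");
#endif
    }
}
```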
