Overview

Motive SDK currently supports three lip sync engines, all of which are configured to work with Motive’s Expression Map system.

uLipSync

The default lip sync engine is based on a modified version of uLipSync, which uses “Mel-Frequency Cepstrum Coefficients (MFCC), which represent the characteristics of the human vocal tract.” uLipSync is supported on all platforms, including WebGL.
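uLipSync itself runs as C# inside Unity; the numpy sketch below is only meant to illustrate what MFCCs are, not how uLipSync implements them. The audio is sliced into overlapping frames, windowed, converted to a power spectrum, projected onto a triangular mel filterbank, and the log filterbank energies are decorrelated with a DCT. All parameter values here (frame length, hop, filter counts) are generic illustrative defaults, not uLipSync's.

```python
import numpy as np

def mfcc(signal, sample_rate=16000, n_mels=26, n_coeffs=12,
         frame_len=400, hop=160, n_fft=512):
    """Toy MFCC extractor: returns (n_frames, n_coeffs) features."""
    # 1. slice into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)]) * np.hamming(frame_len)
    # 2. power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3. triangular mel filterbank (mel spacing approximates human pitch perception)
    hz2mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel2hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, mid, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[m - 1, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    log_mel = np.log(np.maximum(power @ fbank.T, 1e-10))
    # 4. DCT-II decorrelates log-mel energies into cepstral coefficients;
    #    coefficient 0 (overall loudness) is conventionally dropped
    n, k = np.arange(n_mels), np.arange(n_mels)[:, None]
    dct_basis = np.cos(np.pi / n_mels * (n + 0.5) * k)
    return (log_mel @ dct_basis.T)[:, 1:n_coeffs + 1]
```

A lip sync engine in this family then classifies each frame's MFCC vector against per-phoneme reference profiles to pick a mouth shape.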

Volume-Based Lip Sync

This engine is deprecated but still available for use. Volume-based lip sync uses the volume of the audio clip to determine lip and mouth shapes. It is available on all platforms.
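As an illustration of the idea only (not Motive's actual implementation), a volume-based driver can map the RMS level of the current audio window to a single 0–1 jaw-open weight. The function name and dB floor below are assumptions for the sketch.

```python
import numpy as np

def mouth_openness(samples, floor_db=-40.0):
    """Map the RMS level of an audio window to a 0..1 mouth-open weight.

    Illustrative sketch only; the threshold is an assumed default,
    not a value taken from Motive SDK."""
    rms = np.sqrt(np.mean(np.square(samples))) if len(samples) else 0.0
    if rms <= 0:
        return 0.0
    db = 20 * np.log10(rms)  # level in dB relative to full scale (0 dB)
    # anything quieter than floor_db is treated as silence
    return float(np.clip((db - floor_db) / -floor_db, 0.0, 1.0))
```

Each audio frame's weight would then be fed to the avatar's jaw/mouth blendshape; because the mapping ignores spectral content, it cannot distinguish phonemes, which is why this engine is deprecated in favor of uLipSync.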

Oculus Lip Sync

Motive can also work with Oculus Lip Sync by enabling the MOTIVE_OCULUS_LIP_SYNC build flag. Note that Oculus Lip Sync runs only on Meta (Oculus) devices.
