Motive uses Expression Maps to apply facial expressions to avatars, either scripted with a Character Action or through a system feature such as lip sync. Expression Maps allow you to map shapes from the Motive system to any blendshapes and/or transform deltas on an avatar.
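
As a rough mental model, each map entry associates one Motive shape with a set of blendshape targets and/or bone targets on the avatar. The sketch below is a conceptual illustration only, not Motive's actual data types; every type and member name in it (BlendshapeTarget, BoneTarget, MaxEulerDelta, and so on) is made up for the example.

using System.Collections.Generic;
using UnityEngine;

// Conceptual illustration only; not Motive's data model. It shows the kind of
// mapping an Expression Map represents: one Motive shape name can drive any
// number of blendshapes and/or bone (transform) deltas on the avatar.
public struct BlendshapeTarget
{
    public SkinnedMeshRenderer Renderer; // mesh that owns the blendshape
    public string BlendshapeName;        // e.g. a Character Creator blendshape name
}

public struct BoneTarget
{
    public Transform Bone;        // Mecanim bone to manipulate
    public Vector3 MaxEulerDelta; // rotation applied at full shape weight
}

public class ConceptualExpressionMap
{
    // Each Motive shape can map to any combination of blendshape and bone targets.
    public Dictionary<string, List<BlendshapeTarget>> BlendshapeTargets =
        new Dictionary<string, List<BlendshapeTarget>>();
    public Dictionary<string, List<BoneTarget>> BoneTargets =
        new Dictionary<string, List<BoneTarget>>();
}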

Creating an Expression Map

You can create a new Expression Map in the Project folder using the Motive context menu:

The Expression Map editor lists each shape defined in Motive. For each shape, you can specify one or more blendshapes or transform manipulations that are applied when that shape is used in an expression:

Here you can see some of the default blendshapes used for avatars from Character Creator:

Although most of these shapes map directly to blendshapes in the skinned mesh renderer, we can see that this mapping uses a Mecanim Bone to handle the mouth open/close shape:

In this case, the full range of motion of the mouth is limited to a rotation of 0 to 20 degrees around the Z axis.
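
To make that concrete, here is a minimal C# sketch, not Motive's implementation, showing how a single normalized shape value could be applied both ways: as a blendshape weight on the Skinned Mesh Renderer and as the 0-20 degree Z rotation on a jaw bone. The field and blendshape names (faceRenderer, jawBone, "Mouth_Open") are assumptions for the example.

using UnityEngine;

// Illustrative sketch only: this is not Motive's code, just the kind of
// mapping an Expression Map entry encodes for a mouth open/close shape.
public class MouthShapeSketch : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer faceRenderer; // mesh with the facial blendshapes
    [SerializeField] private Transform jawBone;                // Mecanim bone driving mouth open/close
    [SerializeField] private float maxJawAngle = 20f;          // full range: 0-20 deg around Z

    // Apply a single shape value (0..1) the way a map entry would:
    // either as a blendshape weight or as a transform delta on a bone.
    public void ApplyMouthOpen(float shapeValue01)
    {
        // Blendshape route: Unity blendshape weights run 0..100.
        int index = faceRenderer.sharedMesh.GetBlendShapeIndex("Mouth_Open");
        if (index >= 0)
        {
            faceRenderer.SetBlendShapeWeight(index, shapeValue01 * 100f);
        }

        // Mecanim bone route: scale the normalized value into the 0-20 deg Z rotation.
        jawBone.localRotation = Quaternion.Euler(0f, 0f, shapeValue01 * maxJawAngle);
    }
}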

Using a Custom Expression Map

To use a custom Expression Map on an avatar, add an Expression Map Controller component to the avatar and set its Expression Map field to the map you created.
