Getting an avatar in shape for lip sync takes some trial and error, but here are a few steps you can follow to get it working with dialogue fired from Storyflow.
Drag and drop your avatar into your scene. When you check its inspector, you should see Transform and Animator components.
In the Animator component, add a Controller. The one I added is very simple and handles the Idle state for the avatar.
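If you would rather build that simple controller from an editor script instead of through the UI, a minimal sketch is below. The asset paths, the menu item name, and the "Idle" clip are placeholders from my setup, so adjust them to match your project; creating the controller by hand in the editor works just as well.

```csharp
// Editor-only sketch: creates a minimal Animator Controller with a single
// Idle state and assigns it to the avatar selected in the Hierarchy.
// Assumes an Idle AnimationClip already exists at the path below.
using UnityEditor;
using UnityEditor.Animations;
using UnityEngine;

public static class CreateIdleController
{
    [MenuItem("Tools/Create Idle Controller For Selected Avatar")]
    private static void Create()
    {
        // Create the controller asset (it comes with a default base layer).
        var controller = AnimatorController.CreateAnimatorControllerAtPath(
            "Assets/AvatarIdle.controller");

        // Add an Idle state and make it the layer's default state.
        var idleClip = AssetDatabase.LoadAssetAtPath<AnimationClip>("Assets/Animations/Idle.anim");
        var stateMachine = controller.layers[0].stateMachine;
        var idleState = stateMachine.AddState("Idle");
        idleState.motion = idleClip;
        stateMachine.defaultState = idleState;

        // Hook the controller up to the selected avatar's Animator.
        var avatar = Selection.activeGameObject;
        if (avatar != null && avatar.TryGetComponent<Animator>(out var animator))
        {
            animator.runtimeAnimatorController = controller;
        }
    }
}
```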
Next, add a Motive Scene Object component so the avatar can receive dialogue from Storyflow; be sure to name it accordingly.
Then add an Expression Map Controller component; the only thing it needs is a new custom Expression Map.
You can make a new Character Expression Map by right-clicking in your Assets folder and finding the option in the Motive dropdown.
To figure out how to configure your new custom Character Expression Map, I recommend finding the GameObject in your avatar that contains the “head” part of the model. Once you have found it, expand the Skinned Mesh Renderer component and its BlendShapes dropdown, then start sliding the BlendShapes related to the character's mouth. This will give you an idea of which BlendShapes manipulate the face in ways that resemble talking (e.g. Mouth_Funnel).
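If your model has a lot of BlendShapes, sliding each one by hand gets tedious. A quick sketch like the one below, attached to the head GameObject, will log every BlendShape on the Skinned Mesh Renderer and flag the ones with "mouth" in their name as likely candidates; the naming heuristic is just an assumption based on how most avatar rigs label their mouth shapes.

```csharp
// Sketch: lists every BlendShape on this object's Skinned Mesh Renderer
// and flags the ones that look mouth-related, so you can note candidates
// for the Character Expression Map.
using UnityEngine;

[RequireComponent(typeof(SkinnedMeshRenderer))]
public class ListMouthBlendShapes : MonoBehaviour
{
    private void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        var mesh = smr.sharedMesh;

        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            string shapeName = mesh.GetBlendShapeName(i);
            bool looksLikeMouth = shapeName.ToLowerInvariant().Contains("mouth");
            Debug.Log($"{i}: {shapeName}" + (looksLikeMouth ? "  <-- likely lip sync candidate" : ""));
        }
    }
}
```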
Once you have noted which BlendShapes influence the lips the most, open the inspector for your newly created Character Expression Map and start adding settings. The BlendShapes you add will not always be named the same as the options shown in the blank Character Expression Map, so there will be some trial and error. But in general, the BlendShapes you found earlier will have "mouth" in their names and will correlate with other "mouth" options in the Character Expression Map. (For example, if your character's head model contains a BlendShape called "Mouth_Funnel", try adding that as a setting under MouthPuckerOpen; this should make the character's mouth open and close with dialogue, though it might look a bit goofy.)
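Before committing a BlendShape to the Expression Map, it can help to preview roughly how it will look when it moves during dialogue. The sketch below just oscillates one BlendShape's weight every frame; "Mouth_Funnel" is the example shape from above, so swap in whatever you noted on your own model, and remove the script once you are done testing.

```csharp
// Sketch: temporarily drives one mouth BlendShape up and down to preview
// what the mapped shape will roughly look like during dialogue.
using UnityEngine;

[RequireComponent(typeof(SkinnedMeshRenderer))]
public class BlendShapePreview : MonoBehaviour
{
    [SerializeField] private string blendShapeName = "Mouth_Funnel"; // example name; use yours
    [SerializeField] private float speed = 6f;

    private SkinnedMeshRenderer smr;
    private int shapeIndex;

    private void Start()
    {
        smr = GetComponent<SkinnedMeshRenderer>();
        shapeIndex = smr.sharedMesh.GetBlendShapeIndex(blendShapeName);
        if (shapeIndex < 0)
        {
            Debug.LogWarning($"BlendShape '{blendShapeName}' not found on this mesh.");
            enabled = false;
        }
    }

    private void Update()
    {
        // BlendShape weights run 0-100; ping-pong between the extremes.
        float weight = Mathf.PingPong(Time.time * speed, 1f) * 100f;
        smr.SetBlendShapeWeight(shapeIndex, weight);
    }
}
```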
This is what your avatar's Inspector should look like at the end.