
Lip-Sync Fuse Character

Hi,

Has anybody managed to get lip-syncing working with Mixamo Fuse characters yet? They provide tons of blendshapes but no phonemes. The only reasonable one I found was "Mouth.open" (index 35).

I created only one frame and tried to simply map all vowels to it, leaving the consonants out, to get a simple mouth open/close animation, but that does not work:
  • mouth never closes for the whole speech
  • attached beards float in front of the mouth
In an ideal world Fuse 1.4 would introduce phonemes, but I think that will still take a while. How could we proceed from here? I think what I would need is logic like this:
  • Open mouth for specific phonemes, otherwise close again
  • Be able to specify maximum blendshape value, otherwise mouth.open at 100 looks like he is trying to swallow the world ;-)
  • Allow multiple Shapeable scripts on different body parts
    • one on "Body" to manage the mouth open
    • one on "Beards" to assign the same blendshape value to the beard so that it follows along
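The logic above could be sketched as a small driver component. This is only a hypothetical sketch (the class name, field names and default values are my own assumptions, not AC's API); it drives the same blendshape on two SkinnedMeshRenderers and caps the weight:

```csharp
using UnityEngine;

// Hypothetical sketch: drives the same "mouth open" blendshape on two
// SkinnedMeshRenderers (e.g. "Body" and "Beards") with a configurable cap.
// Index 35 is the "Mouth.open" shape mentioned above; check your own mesh.
public class SyncedMouthOpen : MonoBehaviour
{
    public SkinnedMeshRenderer body;
    public SkinnedMeshRenderer beard;
    public int blendShapeIndex = 35;
    [Range (0f, 100f)] public float maxWeight = 60f; // cap so 100% doesn't look like swallowing the world

    // Call with a 0-1 "mouth openness" value from whatever drives the lip sync
    public void SetOpenness (float openness)
    {
        float weight = Mathf.Clamp01 (openness) * maxWeight;
        body.SetBlendShapeWeight (blendShapeIndex, weight);
        if (beard != null)
        {
            beard.SetBlendShapeWeight (blendShapeIndex, weight); // beard follows along
        }
    }
}
```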
Is that doable? Fuse is an awesome package and it would be great to get it to work inside AC.


Comments

  • As no one has commented yet, I created a feature proposal for Fuse. If you are interested as well, please vote :-)

  • edited May 2015
    If phonemes don't make it into Fuse I guess the other approach would still be needed. Any ideas on how to do that in AC? Max value on blendshape and multiple blendshapes manipulated synchronously?
  • AC's lipsyncing has two parts: generating the data in the first place, and doing something with it.  3D speech movement works by normalising blendshapes so that only one is "active" at a time.  But it would be possible to hijack this with a custom script - the data being generated can be used however you want.
  • Updated link to relevant Mixamo thread

    Could you provide some info on how you'd approach writing a custom script for this? Essentially each phoneme would need to be mapped to a set of values for Mixamo's dozens of blendshapes. Like

    Mouth Left: 5.1
    Mouth Right: 5.2
    Mouth Open: 6.3
    Tongue Out: 4.4

    etc...
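One way to sketch that mapping: a lookup from phoneme key to a set of blendshape weights, applied in one go. Everything here is a hypothetical illustration (the class, the phoneme keys and all indices except 35 are placeholders; look the real indices up on your own Fuse mesh):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: each phoneme maps to a set of (blendshape index, weight)
// pairs, applied to the character's face mesh when that phoneme is active.
public class PhonemeShapeMap : MonoBehaviour
{
    public SkinnedMeshRenderer face;

    private Dictionary<string, Dictionary<int, float>> phonemeShapes =
        new Dictionary<string, Dictionary<int, float>>
    {
        { "A", new Dictionary<int, float> { { 35, 60f } } },              // Mouth.open only
        { "O", new Dictionary<int, float> { { 35, 45f }, { 12, 30f } } }, // open + round (index 12 made up)
    };

    public void ApplyPhoneme (string phoneme)
    {
        Dictionary<int, float> shapes;
        if (phonemeShapes.TryGetValue (phoneme, out shapes))
        {
            foreach (KeyValuePair<int, float> pair in shapes)
            {
                face.SetBlendShapeWeight (pair.Key, pair.Value);
            }
        }
    }
}
```

A custom script hooking into AC's generated lip-sync data could call ApplyPhoneme as each phoneme becomes active.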
  • As a quicker solution I've made this modified version of AutoLipSync that uses a blend shape instead of a jaw bone to animate based on the audio volume.

    This seems like a good quick way of doing basic lip syncing with Mixamo characters.
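For anyone who wants to roll the same idea by hand, the volume-driven approach can be sketched like this (a minimal assumption-laden sketch, not the actual modified AutoLipSync script; names and tuning values are mine):

```csharp
using UnityEngine;

// Hypothetical sketch of the volume-driven approach: read the current output
// level of the character's AudioSource and map it onto the "mouth open" shape.
[RequireComponent (typeof (AudioSource))]
public class VolumeToBlendShape : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    public int blendShapeIndex = 35;  // Mouth.open on a Fuse character
    public float sensitivity = 400f;  // tune to taste
    public float maxWeight = 60f;

    private AudioSource source;
    private float[] samples = new float[256];

    void Awake ()
    {
        source = GetComponent<AudioSource> ();
    }

    void Update ()
    {
        // RMS of the most recent output samples = rough speech volume
        source.GetOutputData (samples, 0);
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float rms = Mathf.Sqrt (sum / samples.Length);

        float weight = Mathf.Clamp (rms * sensitivity, 0f, maxWeight);
        face.SetBlendShapeWeight (blendShapeIndex, weight);
    }
}
```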

  • Thanks Zyxil :)
  • I have the same problem. Fuse characters have a lot of blend shapes that can do most facial movements - how can I easily group some of these into the phonemes that AC supports for lip-syncing, as well as into some basic facial expressions?  Thanks!
  • I have added Rogo LipSync and I am trying to write an Action to activate it from an ActionList.
    I am not sure how to get it running properly - can anyone suggest what I need to change here?
    I am trying to follow the instructions at http://www.adventurecreator.org/tutorials/writing-custom-action
    I hope someone can suggest the changes needed in the script below. Thanks!!

    using UnityEngine;
    using System.Collections;
    using RogoDigital.Lipsync;
    #if UNITY_EDITOR
    using UnityEditor;
    #endif

    namespace AC
    {

        [System.Serializable]
        public class ActionLipSync : Action
        {

            public LipSyncData clip;
            public LipSync[] characters;

            public ActionLipSync ()
            {
                this.isDisplayed = true;
                category = ActionCategory.Custom;
                title = "Activate Lipsync";
                description = "Runs a Rogo LipSync file.";
            }

            override public float Run ()
            {
                foreach (LipSync character in characters)
                {
                    character.Play (clip);
                }
                return 0f; // Run () must return a float - this was missing
            }

        }

    }


  • Seriously, when I came across this, it took me less than 30 mins to create a Fuse character and have him do an absolutely passable lip-synced version of Winston Churchill's June 1940 speech in AC. Then my son asked me what the exact point was ... :-)
  • @Snebjorn, cool, how do you get the character to start talking in AC with lipsynch? Do you customize an Action in the Action list? Do you write a short script? Hope you can help telling the process. Thanks!
  • It's actually exceedingly simple: it's handled through an Audio Source on the character, so no scripting or actions or anything. I believe Chris did some extra integration for 2D characters, but that's not needed for 3D.
  • @Snebjorn, maybe I am not being clear. What I mean is: how do you trigger a specific audio clip with lip sync? Say you have a character and want him to say something at a specific moment. I know you can do lip sync with 'On Awake' so it plays when the game starts, but that is too limiting. Hope you can describe the process. Thanks!
  • Not sure what you mean by "at a specific instant"... I have an example in one of my scenes where an NPC makes a remark (without starting a conversation) when the player walks through a trigger - the action list is just "Dialogue -> Play speech" with the NPC set as the speaker. Is it that sort of thing you're after?
  • Yes - in the tutorial, to do lip sync you add a Blend shape action alongside Dialogue -> Play speech. My question is which action you add to Dialogue -> Play speech to make it work. See the point in the tutorial video where Chris shows this.
  • Sorry at 1h 46 m 40s
  • That was for Facial expression and 1:53:10 for lipsync.
  • Ah, okay - maybe I wasn't making it clear enough that I use Salsa for this, which requires very minimal preparation when using Fuse characters, and works alongside AC without any setup inside AC at all.

    If you're interested in seeing an example, I could throw together a quick webplayer demo?
  • Ok cool, I thought it would be harder, I'll test more, yes sure if you have a demo that would be fun to test!
Welcome to the official forum for Adventure Creator.