
Lip syncing synchronization issue

Hi,

I'm trying to use the "Lip syncing" feature with real-time voice speaking (SAPI TTS).

My test environment:
Win 10, Unity 2019.4.17f1, AC 1.72.4, RTVoice pro 2021.1.0
I use RTVoice pro for the real-time speaking (with its Adventure Creator connector)
I use AC Lip syncing settings: "From Speech Text" mode, "Portrait And Game Object"
I have an AC character with a "Use interaction" that runs an ActionList with a "Dialogue: Play speech" action.

The main problem is that the real time speaking has some delay before it starts (about 1.5sec).
The AC lip syncing starts immediately.
So when the AC Play speech action starts, the character moves his mouth for about 1.5 sec, and only then do I hear the real-time audio.

I tried:
1. Adding [wait:2] at the beginning of the text in the Dialogue action, but it seems to have no effect on the lip syncing.
2. Writing some code that turns off lip syncing when speech starts (AC.EventManager.OnStartSpeech), and turns it back on after 2 seconds. The lip syncing wasn't actually turned off (although the Inspector displayed that it was).

Is lip syncing not affected by text tokens?
Is there a way to work around this, e.g. add a delay before lip syncing starts?
Thanks

Comments

  • Is the issue that the audio starts too late, or that AC's lipsyncing starts too soon?

    The RTVoice/AC integration is handled on the RTVoice side, so you may need to contact them regarding the delay in the audio.

I shall look into the effect of [wait:X] tokens on automatic lipsyncing - are you showing it through the use of portrait graphics? Technically, though, I'd expect both AC and RTVoice's integration to respect their presence.

The audio starts too late (because of the delay of the TTS engine used; I guess it's an inherent delay - every engine will have its own).
    AC lipsyncing starts right on time (when the AC OnStartSpeech event is raised).

I use "Portrait And Game Object" with the game object's MeshRenderer blendshapes.

About respecting the tokens: I think in any case I will have to pre-process the text I write into a "Dialogue: Play speech" action, because:
    1. If lipsyncing respects the [wait] token, I will have to remove it before I pass the text to RTVoice - unless the RTVoice developer adds that to the AC connector.
    2. When I use XML tags to control the voice characteristics, I don't want AC subtitles to display them - unless you can add a subtitles option to ignore XML tags.

    But processing the text is not a problem.
The problem is the delay, which could be solved by:
    A. Lipsyncing respecting the [wait] token - then I'll be forced to add [wait:2] to all my "Dialogue: Play speech" actions, and remove it before passing the text to RTVoice.
    B. Adding a "wait before lipsyncing starts" float (in the Speech settings menu, beside the lipsyncing "Process speed" slider).
    C. Giving scripting control over when AC lipsyncing starts.

I'd prefer option C or B.

    Thank you for looking into it.

When I use XML tags to control the voice characteristics, I don't want AC subtitles to display them. Unless you can add a subtitles option to ignore XML tags.

    It's not XML, but it is possible to define custom speech text tokens - see the Manual's "Speech event tokens" chapter for details.

By giving control with scripting over when to start the AC lipsyncing.

    Any Manager field can be changed at runtime through scripting - just right-click on the field's label to get an API reference to it.

    In the case of the Speech Manager's "Perform lipsync on" field, it's:

    AC.KickStarter.speechManager.lipSyncOutput
    

    You could try, for example, setting this to LipSyncOutput.Portrait when a speech begins, and back to LipSyncOutput.PortraitAndGameObject after 2 seconds.
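As a rough sketch of that approach (I'm assuming the `(AC.Char, string, int)` delegate signature for `OnStartSpeech` and a hypothetical 2-second delay value here - check the EventManager reference for your AC version):

```csharp
using System.Collections;
using UnityEngine;
using AC;

// Sketch: when speech begins, drop the lipsync output to Portrait only,
// then restore "Portrait And Game Object" after a delay that roughly
// matches the TTS engine's start-up lag. The delay value and the event
// signature are assumptions, not confirmed AC behaviour.
public class DelayedLipSync : MonoBehaviour
{
    public float delay = 2f;

    private void OnEnable ()  { EventManager.OnStartSpeech += StartSpeech; }
    private void OnDisable () { EventManager.OnStartSpeech -= StartSpeech; }

    private void StartSpeech (AC.Char speakingCharacter, string speechText, int lineID)
    {
        StopAllCoroutines ();
        StartCoroutine (RestoreAfterDelay ());
    }

    private IEnumerator RestoreAfterDelay ()
    {
        KickStarter.speechManager.lipSyncOutput = LipSyncOutput.Portrait;
        yield return new WaitForSeconds (delay);
        KickStarter.speechManager.lipSyncOutput = LipSyncOutput.PortraitAndGameObject;
    }
}
```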

    Though, this scenario seems to be a good case for allowing characters to have an internal "perform lipsyncing" property, that you could also modify through script. I'll have a look into this possibility.

  • Though, this scenario seems to be a good case for allowing characters to have an internal "perform lipsyncing" property, that you could also modify through script.

    I'm forgetting myself - the Player / NPC script's isLipSyncing property is already public.

So, I tried using the isLipSyncing property, but lipsyncing still starts even if I set the property to false.

I decided not to use the "real time" speech (for other reasons than the delay). I will use audio files instead, so for now I don't need a solution for controlling the lip syncing delay.
    Thanks for your help so far. If you want me to keep debugging it, I'll be glad to assist.

  • No problem. Just for completeness:

I tried using the isLipSyncing property, but lipsyncing still starts even if I set the property to false.

This'll be set to true automatically when speech begins, but that happens before the OnStartSpeech event is called. Setting it to false inside that event should be enough to turn it off for good.
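A minimal hook for that might look like the following (the `(AC.Char, string, int)` delegate signature is an assumption - verify it against your AC version's EventManager reference):

```csharp
using UnityEngine;
using AC;

// Sketch: suppress a character's automatic lipsyncing by setting
// isLipSyncing to false inside the OnStartSpeech event, which fires
// after AC has already enabled it for the new speech line.
public class SuppressLipSync : MonoBehaviour
{
    private void OnEnable ()  { EventManager.OnStartSpeech += StartSpeech; }
    private void OnDisable () { EventManager.OnStartSpeech -= StartSpeech; }

    private void StartSpeech (AC.Char speakingCharacter, string speechText, int lineID)
    {
        speakingCharacter.isLipSyncing = false;
    }
}
```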

Welcome to the official forum for Adventure Creator.