
Can ActionList parameters be translated?

Hi!

Could you please tell me if there's a way to make string parameters of ActionLists localisable? Or is the only way to create a variable and pass it to the ActionList? It's doable, but I have a lot of strings being passed as parameters, and creating a variable for each one seems like a lot of additional work. Maybe I should modify some of the engine's code to parse string ActionList parameters the same way as text lines in Dialogue -> Play speech?

Thank you!

Comments

  • Built-in, no. Parameters aren't designed to hold any one specific value, which they would need to do in order to be translatable.

    You're not limited to storing translatable text in variables, however. See the Manual's "Custom translatables" chapter - you could, feasibly, store your strings in a dedicated script, come up with a way to reference them at runtime (e.g. a dictionary), and then have your String parameters instead represent the reference (e.g. a Dictionary key).

    How are these parameters used that they require translating?
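
    As a rough illustration of the "dictionary key" suggestion above - all names here are hypothetical, and a real setup would feed the per-language text through the Manual's "Custom translatables" workflow rather than a hard-coded English table:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: a String parameter stores a key such as "NoseSniff",
// and a lookup class resolves it to the current language's text at runtime.
public static class GameStrings
{
    // Populating this per language is up to you (e.g. via AC's
    // "Custom translatables" workflow described in the Manual).
    private static readonly Dictionary<string, string> strings = new Dictionary<string, string>
    {
        { "NoseSniff", "You sniff carefully. Nothing unusual." },
        { "BioportIntro", "A bioport. It hums expectantly." },
    };

    public static string Get(string key)
    {
        // Fall back to the key itself so a missing entry is easy to spot
        string value;
        return strings.TryGetValue(key, out value) ? value : key;
    }
}
```

    A String parameter would then hold e.g. "NoseSniff", and a small script would call GameStrings.Get on it when the ActionList runs.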

  • edited April 2021

    Thanks for the answer!

    I have multiple ActionLists for similar behaviour, so that I don't have to copy-and-paste the same Actions and can modify the behaviour in a single place if needed.

    For example, I have several interaction buttons (mouth, nose, ear, hand, bioport, etc.), and each of them has its own ActionList. Here's the one for the nose, for example, called "alNose":

    And I run "alNose" at every hotspot when the nose is used:

    So every time the player smells something, there'll be a sniffing sound and a nose speech response.

    Or, for example, those interaction buttons can be added and removed (the protagonist gains some additional organs, or temporarily transplants his own onto other creatures). I've got a universal "alAddBInt" ActionList:

    And then I use it like this:

    So whenever the new interaction button is added, the narrator will tell the player about it and the biointerface will speak a little about itself, what's its purpose and so on.

    I've looked through CustomTranslatableExample script, but to be honest, I don't get it. Where should I add this script in my cases? And how would I get the dictionary values while running the ActionLists? It's a bit confusing for me now...

  • If the text is to be used for speech lines, it's best to create those lines as regular Speech Actions - so that they get properly gathered by the Speech Manager as speech text, with all the added features that come with it, e.g. the possibility of voice audio.

    This need not be done within the same ActionLists that you're using, however. If you create a separate, dedicated scene or ActionList asset that contains all the possible speech Actions, you can gather them up as normal and then use parameters to reference their ID values.

    Once speech is gathered up by the Speech Manager, it'll be given a unique ID number that gets displayed in the Action. The Dialogue: Play speech ID Action, which is available on the wiki, then allows you to play a pre-existing line by referencing that ID. This ID number can also be passed to the Action as an Integer parameter.

  • Thank you!

    If I have to create ActionLists for speech gathering, maybe it'd be better if I created separate Cutscenes/ActionLists for every speech line, so that I could pass these ActionLists as parameters? That would solve the problem and make the Actions more readable: the parameter would be "csBioportTellsAboutItself" instead of "[localvar:6]" or an integer number, I guess.

  • That would be more in keeping with the recommended workflow.

    The ActionList: Run Action accepts GameObject parameters for scene-based ActionLists, and Unity Object parameters for ActionList assets.

  • I tried it and it worked! So, now I have a solution, at least =). But it's a bit inconvenient, because it requires some additional steps (adding Cutscenes, calling them). Of course, if there's no other option, I'll go this way. But I've got another idea - maybe you could help me with it, please?

    What if I played a certain ActionList for certain speakers before they speak their lines? Then I could use the general Dialogue: Play Speech Action, all the special actions would be performed automatically, and the speech would wait until they're finished before playing. Something like this:

        // Attach this to a GameObject that stays enabled at all times,
        // and keep the "alPreNose" asset inside a Resources folder
        // so that Resources.Load can find it.
        using UnityEngine;
        using AC;

        public class NoseSpeechListener : MonoBehaviour
        {
            private void OnEnable()
            {
                EventManager.OnStartSpeech += EventManager_OnStartSpeech;
            }

            private void OnDisable()
            {
                EventManager.OnStartSpeech -= EventManager_OnStartSpeech;
            }

            private void EventManager_OnStartSpeech(Char speakingCharacter, string speechText, int lineID)
            {
                if (speakingCharacter != null && speakingCharacter.name.StartsWith("Nose"))
                {
                    // The generic overload avoids a silent null from a failed cast
                    ActionListAsset nose = Resources.Load<ActionListAsset>("alPreNose");

                    nose?.Interact();
                }
            }
        }

    But this code doesn't work, unfortunately =).

  • I'm not sure I quite follow the intent, but I can't see anything wrong with the code - provided that it's attached to an object that's enabled at all times (i.e. not e.g. a Menu UI).

    Try placing Debug.Log statements inside the EventManager_OnStartSpeech function to see when it's being called, and with what parameter values.

  • I want to use the Dialogue: Play Speech Action like I did before, delegating to the "alPreNose" ActionList to run all the specific actions automatically every time the Nose speaks (playing the sniffing sound, and so on).

    Using the code above I can run the ActionList, but the speech Action and the alPreNose actions play simultaneously. Actually, I'm kind of reconciled to that already =). But if there's a simple way to make the speech Action wait until my alPreNose actions are finished, it'll be just perfect!

  • But if there's a simple way to make speech action wait until my alPreNose actions are finished, it'll be just perfect!

    This is the same speech Action that causes the "Nose" character's OnStartSpeech event to run?

    AC also has an OnStopSpeech event, which you can hook into as well / instead depending on need.

    For a list of all speech-related events, see the Manual's "Speech scripting" chapter.
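
    As a sketch of hooking into OnStopSpeech - the "alPostNose" asset name is made up, and the exact delegate signature assumed here should be checked against the "Speech scripting" chapter:

```csharp
using UnityEngine;
using AC;

// Sketch: run a follow-up ActionList when the Nose *finishes* speaking,
// instead of (or as well as) when it starts.
// Assumption: OnStopSpeech's delegate receives the speaking character -
// verify the signature in the Manual's "Speech scripting" chapter.
public class NoseSpeechEndListener : MonoBehaviour
{
    private void OnEnable()
    {
        EventManager.OnStopSpeech += OnStopSpeech;
    }

    private void OnDisable()
    {
        EventManager.OnStopSpeech -= OnStopSpeech;
    }

    private void OnStopSpeech(Char speakingCharacter)
    {
        if (speakingCharacter != null && speakingCharacter.name.StartsWith("Nose"))
        {
            // "alPostNose" is a hypothetical asset in a Resources folder
            ActionListAsset postNose = Resources.Load<ActionListAsset>("alPostNose");
            postNose?.Interact();
        }
    }
}
```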

  • edited April 2021

    This is the same speech Action that causes the "Nose" character's OnStartSpeech event to run?

    Yes, it is. So: I click on a hotspot with the nose button, the nose starts to say its line, and the OnStartSpeech event is fired. At that moment, after calling nose?.Interact(), both are running in parallel: the Nose's speech line and the "alPreNose" ActionList. And I'd like, if it's at all possible, to play my ActionList beforehand, wait until it's finished, and only then run the nose's speech Action.

  • If it was just one pair of ActionLists, you could possibly begin the speech ActionList with an ActionList: Check running Action that loops back on itself while alPreNose is found to be running.

    More complex, but do-able, would be to have a script that listens out for given ActionLists and stores whether any of them are being run or not in an AC variable. However, both cases would have to involve inserting Actions at the beginning of your speech ActionList, so I don't know how appropriate that would be.
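
    A sketch of that second approach, with heavy caveats: the OnBeginActionList / OnEndActionList delegate signatures and the GlobalVariables.SetBooleanValue call assumed here should be verified against AC's Scripting Guide, and the variable ID is a placeholder:

```csharp
using UnityEngine;
using AC;

// Sketch: mirror whether "alPreNose" is running into a Global Boolean
// variable, which the speech ActionList can then test with a
// Variable: Check Action before playing the line.
// Assumptions to verify against the Scripting Guide: the
// OnBeginActionList / OnEndActionList signatures and
// GlobalVariables.SetBooleanValue.
public class PreNoseRunningTracker : MonoBehaviour
{
    public ActionListAsset preNoseAsset;   // assign alPreNose in the Inspector
    public int isRunningVariableID = 0;    // ID of a Global Boolean variable (placeholder)

    private void OnEnable()
    {
        EventManager.OnBeginActionList += OnBegin;
        EventManager.OnEndActionList += OnEnd;
    }

    private void OnDisable()
    {
        EventManager.OnBeginActionList -= OnBegin;
        EventManager.OnEndActionList -= OnEnd;
    }

    private void OnBegin(ActionList actionList, ActionListAsset actionListAsset, int startingIndex, bool isSkipping)
    {
        if (actionListAsset == preNoseAsset)
        {
            GlobalVariables.SetBooleanValue(isRunningVariableID, true);
        }
    }

    private void OnEnd(ActionList actionList, ActionListAsset actionListAsset, bool isSkipping)
    {
        if (actionListAsset == preNoseAsset)
        {
            GlobalVariables.SetBooleanValue(isRunningVariableID, false);
        }
    }
}
```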

  • Thank you for your help and patience! I'll try both of them!

Welcome to the official forum for Adventure Creator.