Photon Voice 2 Workflow

The following document details the steps used to implement SALSA LipSync v2 with Photon Voice 2. These instructions apply only to SALSA LipSync v2 and Photon Voice 2; previous versions are no longer supported.

NOTE: This document is a demonstration example of how to configure Photon Voice 2 and SALSA LipSync v2 to work together in one simple instance. It is not intended to be a complete solution covering all situations or scenarios, and no support is offered beyond getting this simple scene to work.

Requirements

  • SALSA LipSync v2.4+
  • Photon Voice 2

Creating a Spawnable Character Prefab

In this workflow, we will convert one of the Photon Voice 2 character prefab resources to work with SALSA v2, replacing the existing model mesh with the boxHead model as an example.

NOTE: This document assumes you have already imported/installed Photon Voice 2 (including setting up your AppId and any other Photon-specific settings) and SALSA LipSync Suite v2.

  1. Navigate to the prefab resources folder (Photon Voice 2):
    Assets > Photon > PhotonVoice > Demos > DemoVoicePun > Resources
  2. Select one of the existing prefabs and duplicate it (we will use the ZomBunny prefab).
    (screenshot: prefab duplicate)
  3. Rename the new prefab if desired (we have renamed it to boxHead).
  4. Open the prefab to edit (using the method appropriate for your version of Unity).
  5. Expand the prefab and remove the existing model mesh.
    (screenshot: prefab delete existing model)
  6. Add the boxHead v2 model to the prefab.
    (screenshot: prefab add new model)
  7. Select the prefab root.
  8. Add SALSA LipSync to the prefab.
    (screenshot: prefab root add SALSA)
  9. Configure SALSA:

    • References Section:

      • Link the Speaker object in the prefab (where the prefab's AudioSource is configured) to the SALSA AudioSource reference.
      • Click the "Add QueueProcessor" button.
      • References should now be (configured) blue.
        (screenshot: references configured)
      • Collapse the References section.

        NOTE: By default, Photon Voice 2 does not stream local audio back to the local client, so local-client lipsync is not available out of the box. Crazy Minnow Studio will soon release an add-on script that facilitates local avatar lipsync. The script is only an example and may not fulfill the requirements of all scenarios; if it does not meet your requirements, you are free to modify it. This add-on script will require an update to SALSA LipSync Suite v2. Stay tuned for more info.

    • Viseme Configuration Section:

      • Click "New Viseme" to create the first viseme.
      • Rename the viseme if desired (we are calling it saySmall).
      • On the first component, drag the boxHead model reference from the scene to the SkinnedMesh slot under the Shape controller for the viseme expression component.
        (screenshot: viseme config)
      • Select the saySml blendshape.
        (screenshot: viseme config 2)

      • Click "New Viseme" to create the second viseme.

      • Rename the viseme if desired (we are calling it sayMedium).
      • Select the sayMed blendshape.

      • Click "New Viseme" to create the third viseme.

      • Rename the viseme if desired (we are calling it sayLarge).
      • Select the sayLrg blendshape.
        (screenshot: viseme config 2)

      • Select "Trigger Display Mode".

      • Select Curve.
        (screenshot: viseme config 3)
    • SALSA is now configured.

  10. Save the prefab using the method required for your Unity version.
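The References wiring in step 9 can also be done from code at runtime. Below is a minimal, example-only sketch of that idea; the "Speaker" child name comes from the demo prefab, but the SALSA field names (`audioSrc`, `queueProcessor`) are assumptions based on the SALSA v2 API and may differ in your version, so verify them against the SALSA API documentation:

```csharp
using UnityEngine;
using CrazyMinnow.SALSA; // SALSA LipSync Suite v2 namespace

// Example-only component: wires SALSA's References section from code
// instead of the Inspector. Attach to the prefab root alongside Salsa.
public class SalsaReferenceWiring : MonoBehaviour
{
    void Awake()
    {
        Salsa salsa = GetComponent<Salsa>();

        // Link the Speaker object's AudioSource to SALSA
        // (assumed field name: audioSrc).
        salsa.audioSrc = transform.Find("Speaker").GetComponent<AudioSource>();

        // Equivalent of the "Add QueueProcessor" button: ensure a
        // QueueProcessor exists and link it (assumed field name: queueProcessor).
        salsa.queueProcessor = gameObject.GetComponent<QueueProcessor>()
                               ?? gameObject.AddComponent<QueueProcessor>();
    }
}
```

For this walkthrough, the Inspector workflow in step 9 is the supported path; a script like this is only useful if you need to build prefab wiring procedurally.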

Configure the Photon Voice Demo Scene

  1. Open the demo scene:
    Assets > Photon > PhotonVoice > Demos > DemoVoicePun > DemoVoicePun-Scene
  2. Save the scene as a new scene (this preserves the original example scene so we can work on a copy without damaging it).
  3. Select the PUN GameObject.

    • Under the Character Instantiation component, configure the new boxHead prefab.
      • Change the Size to 2.
      • Replace the existing entries with the boxHead prefab created in the above section.
        (screenshot: scene config)
  4. The scene is now configured -- save the scene.
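For background on why the prefab must live in a Resources folder (step 1 of the previous section): PUN 2 spawns networked prefabs by name via `PhotonNetwork.Instantiate`, which looks the prefab up in Resources. A hedged sketch of the equivalent call (the class and callback shown are illustrative; the demo's own Character Instantiation component handles this for you):

```csharp
using UnityEngine;
using Photon.Pun;

// Example-only: how a networked prefab like boxHead is typically
// spawned in PUN 2 once the client has joined a room.
public class BoxHeadSpawner : MonoBehaviourPunCallbacks
{
    public override void OnJoinedRoom()
    {
        // Looks up "boxHead" in a Resources folder and instantiates it
        // on all clients in the room.
        PhotonNetwork.Instantiate("boxHead", Vector3.zero, Quaternion.identity);
    }
}
```

If you rename or move the prefab out of a Resources folder, this by-name lookup fails, which is why the walkthrough keeps the duplicate inside the demo's Resources folder.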

Configure Photon PUN Services

  1. Run the PUN Wizard to configure your services.
    (screenshot: PUN Wizard)
  2. Enter your AppId.
    (screenshot: PUN Wizard App Id)
  3. Enter your Voice App Id.
    (screenshot: PUN Wizard Voice App Id)

Test the Scene

  1. Open Build Settings.
  2. Add the scene to the 'Scenes in Build'.
  3. Build the project.
  4. Launch the built project.
    Enable:
    • Voice Detect
    • Transmit
      (screenshot: scene options)
  5. Next, run the Editor project.
  6. While you speak into the microphone in the built instance, the avatar will lip-sync in the Editor instance.
    (screenshot: scene options)

NOTE: Enable Transmit in both instances to lip-sync on the remote instance. By default, Photon Voice does not stream audio back to the local instance. Crazy Minnow Studio will soon release an add-on script presenting a usable example of how to facilitate local avatar lipsync. The script is only an example and may not fulfill the requirements of all scenarios, and it will require an update to SALSA LipSync Suite v2. Stay tuned for more info.