Overview

The SALSA and Dissonance teams have worked together to come up with an integration solution that is simple and works great! PLEASE NOTE: this documentation is new and is likely to be updated frequently.

Dissonance supports pretty much any networking system, making it flexible and compatible with nearly any project. In this tutorial, we will specifically use Unity's Networking system (as our transport mechanism). Of course, we will also be using Dissonance Voice Chat, SALSA Lip-Sync, and the free SalsaDissonanceLink add-on to create a lip-sync'd voice-chat project. Other networking options should work, but have not been completely tested.

UPDATE: 2017-05-05: Now supports local lip-sync (see requirements sections).
UPDATE: 2017-04-27: Now supports Salsa2D as well as Salsa3D. This demonstration will focus on Salsa3D; however, the workflow will be the same for either platform.

Requirements

The SALSA and Dissonance asset systems have been updated to be compatible with each other for voice-chat lip-synchronization. As such, you will need to get the latest versions of SALSA and Dissonance to hook this all up:

  • SALSA Lip-Sync v1.5.0+
  • Dissonance Voice Chat v1.0.6+
    Update: [2017-05-05] Local player lip-sync requires Dissonance Voice Chat v1.0.9+. Dissonance integration functionality has changed with v1.0.9+ -- 3rd party integrations are no longer distributed with the Unity Asset Store product and are instead downloaded from the Dissonance support website. Please refer to the readme file included with the Dissonance Voice Chat asset.
  • Update: [2017-05-05] The free SalsaDissonanceLink v0.7.0-beta add-on -- see release notes below.
  • A known working microphone.
  • boxHead.v2 model [optional]

Installation

  1. Import SALSA Lip-Sync into your project and familiarize yourself with SALSA using the Quickstart Instructions for SALSA Lip-Sync.
    NOTE: We will not be using the 'Example' resources from SALSA in this tutorial (see image). You can uncheck them to save space and import time.
    SALSA Import Settings
  2. Import Dissonance Voice Chat and familiarize yourself with Dissonance using the Dissonance Voice Chat - Getting Started guide.

    Update [2017-05-05] NOTE: This process has changed. Each integration package is now available separately from the Dissonance support website. Please see the readme included with Dissonance for more information.

    Open Dissonance Integrations Window

    Dissonance Integrations Window

    For this tutorial, we are only importing the UNet_HLAPI integration add-on (see image and refer to the UNet_HLAPI Getting Started guide).
    Dissonance Import Settings

  3. After both SALSA and Dissonance have been imported, download the SalsaDissonanceLink add-on and import it into your project.
    NOTE: You will need to provide the Invoice Number you received for SALSA to download the add-on. (Your invoice number may be the same for both products if they were purchased together.)
    SalsaDissonanceLink Import
  4. Import the new boxHead.v2 model. [optional]
    boxHead.v2 Import

Setting up Voice-Chat and Lip-Sync

Let's break this down into three steps:

  1. Create a spawnable prefab for the player characters.
  2. Create the Dissonance manager object.
  3. Create the Unity Networking manager object.

After all of the pieces are created, we can test our scene.

Build a Player Prefab

To properly use character models in a networked setting, the models need to be spawned/instantiated at run-time (after the network services have started). Unity Networking provides a simple, no-frills interface for spawning characters into a networked scene when a client connects to the server. The following steps demonstrate creating a prefab from the SALSA boxHead character (ensure you have already installed SALSA, Dissonance, and the linking add-on -- see Requirements and Installation steps above):
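
For reference, the sketch below illustrates what the stock UNet NetworkManager does when a client connects: it instantiates the registered player prefab on the server and assigns it to the new connection. You do not need this code for the tutorial; it is shown only to clarify the run-time spawning concept.

using UnityEngine;
using UnityEngine.Networking;

// Illustration only: the stock NetworkManager already performs this spawn for the
// prefab assigned under Spawn Info > Player Prefab; no custom code is required.
public class ExamplePlayerSpawner : NetworkManager
{
    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        // The player prefab is instantiated on the server at run-time...
        var player = Instantiate(playerPrefab, Vector3.zero, Quaternion.identity);

        // ...then handed to the connecting client, which spawns it on every client.
        NetworkServer.AddPlayerForConnection(conn, player, playerControllerId);
    }
}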

  1. Drag the boxHead model into a blank scene.
    NOTE: If using boxHead.v2, we have purposefully rotated the model back on the X-axis. When you drag him into your scene he will be facing up. Next, reset the X-rotation value to 0 and the model will be properly oriented.
    Drag boxHead into Scene

  2. Add the following components to the new player-prefab object (a scripted equivalent is sketched at the end of this section):

    • Add Salsa3D from the component menu (if your project is 2D, you may use Salsa2D): Crazy Minnow Studio > Salsa3D: (show me)

      Remember: It is necessary to configure SALSA. For this boxHead implementation, we can simply click Auto-Link to grab the recommended shape names. Additionally, some character models, such as FUSE, iClone, DAZ, MCS, etc., will require a different setup. Please review and implement the respective one-click or other configuration methods for your model type.

    • Add SalsaDissonanceLink from the component menu:
      Crazy Minnow Studio > Addons > SalsaDissonanceLink: (show me)

    Update [2017-05-05]: For local player lip-synchronization, there is a new option that must be enabled.
    SalsaDissonanceLink Use Local Lip-Sync Option

    • Add a Dissonance Voice Player from the project list (remember, we are configuring a UNet system): navigate to Dissonance > Integrations > UNet_HLAPI > HlapiPlayer and drag it onto your player-prefab: (show me)
      NOTE: Adding the Dissonance Voice Player should automatically add a Network Identity component as well. If it does not, you will need to add it manually. From the component menu, select: Network > Network Identity.
      IMPORTANT: Enable Local Player Authority on the Network Identity.
  3. Optional Components:
    The following components are not necessary to establish voice-chat or lip-synchronization; however, RandomEyes will give the character more life and the player controller will allow us to position our players for better viewing.

    • Add RandomEyes from the component menu: Crazy Minnow Studio > RandomEyes3D

    Remember: RandomEyes also needs to be configured for your model. Since boxHead uses the standard names, we can Auto-Link the required shapes.

    • Add a player movement controller; we added the Dissonance Player Controller for movement.
      NOTE: If a movement controller is added, add a UNet Network Transform to support the player controller; otherwise, movement will not be translated across the network.
  4. Once configured, our player-prefab components look like this (screenshot)

  5. Create your player-prefab by dragging the scene hierarchy object to the Editor Project list (we saved our prefab in a prefabs folder under the boxHead model).
    Create player-prefab
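
If you prefer to wire up the prefab from script, the sketch below attaches the same components described in step 2 above. The namespaces shown for Salsa3D and HlapiPlayer, and the assumption that SalsaDissonanceLink is accessible without its own using directive, should be verified against the scripts included with your asset versions.

using UnityEngine;
using UnityEngine.Networking;
using CrazyMinnow.SALSA;                  // assumed namespace for Salsa3D -- verify against your import
using Dissonance.Integrations.UNet_HLAPI; // assumed namespace for HlapiPlayer -- verify against your import

public static class PlayerPrefabSetup
{
    // Attaches the voice-chat/lip-sync components described in step 2 to a player-prefab object.
    public static void Configure(GameObject playerPrefab)
    {
        playerPrefab.AddComponent<Salsa3D>();             // still requires shape configuration (e.g., Auto-Link)
        playerPrefab.AddComponent<SalsaDissonanceLink>(); // links Dissonance voice data to SALSA
        playerPrefab.AddComponent<HlapiPlayer>();         // the Dissonance voice player for UNet

        // The Dissonance voice player normally pulls in a Network Identity automatically;
        // add one if it is missing and enable Local Player Authority.
        var identity = playerPrefab.GetComponent<NetworkIdentity>();
        if (identity == null)
            identity = playerPrefab.AddComponent<NetworkIdentity>();
        identity.localPlayerAuthority = true;
    }
}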


Setup Dissonance

The Dissonance website has an excellent collection of documentation. Specifically, there is a quickstart section on setting up a Unity Networking scenario.

  1. Drag the DissonanceSetup prefab into the scene. Learn about Dissonance Comms.
    • From the project list, navigate to: Dissonance > Integrations > UNet_HLAPI > DissonanceSetup
  2. Add the following components to the DissonanceSetup object created in the previous step:

    • Add a UNet Network Identity from the component menu, select: Network > NetworkIdentity
    • Add a Dissonance Voice Receipt Trigger from the project list: Plugins > Dissonance > VoiceReceiptTrigger. Learn more about the Voice Receipt Trigger.
    • Add a Dissonance Voice Broadcast Trigger from the project list: Plugins > Dissonance > DissonanceBroadcastTrigger. Learn more about the Voice Broadcast Trigger.

      NOTE: For this demo, set the Activation Mode to 'Push To Talk'. This may not be desirable for your project; however, this setting makes it easier to test the project on a single computer. It also requires that the project not be set to 'Run in background'.

    Configure PTT Mode

    • Also configure a Unity Input axis named "PTT" (we simply changed the Jump axis [space] to PTT). [Edit > Project Settings > Input] A quick way to verify the axis from script is sketched after this list.

    Configure PTT InputAxis

  3. Once configured, our DissonanceSetup object's components look like this (screenshot).
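
If you would like to confirm the "PTT" axis is recognized, the minimal sketch below (our own helper, not part of Dissonance or SALSA) logs to the Console while the push-to-talk key is held:

using UnityEngine;

// Quick sanity check for the "PTT" axis configured in the Input Manager:
// attach to any object in the scene and watch the Console while holding the PTT key.
public class PttInputCheck : MonoBehaviour
{
    void Update()
    {
        if (Input.GetButtonDown("PTT"))
            Debug.Log("PTT pressed -- Dissonance should start transmitting.");

        if (Input.GetButtonUp("PTT"))
            Debug.Log("PTT released -- transmission should stop.");
    }
}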

Setup Unity Networking

There is quite a bit of flexibility in how a 'Network Manager' can be set up. We will add a new root object and call it 'Network Manager'. Confirm your components resemble ours.

  1. Create a new empty GameObject in the scene and rename it 'Network Manager'.
  2. Add the following components to the Network Manager object:

    • Add a Network Manager from the component menu, select: Network > NetworkManager

    • [Optional] To more easily facilitate starting and joining a server, we added a NetworkManagerHUD. From the component menu, select: Network > NetworkManagerHUD

  3. Configure the UNet Network Manager:

    • Link the player-prefab object we saved in the project list to the Network Manager's Spawn Info > Player Prefab slot.

    • Also link a Dissonance Player Tracker to the Network Manager's Spawn Info > Registered Spawnable Prefabs slot (a scripted equivalent of this configuration is sketched after this list).

  4. Once configured, our Network Manager object components look like this (screenshot).
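
The same wiring can be performed from script if desired. The sketch below assumes you hold references to the player-prefab and the Dissonance Player Tracker prefab; the field names are our own and not part of any asset.

using UnityEngine;
using UnityEngine.Networking;

// Mirrors step 3: assigns the spawnable player-prefab and registers the Dissonance
// Player Tracker prefab with the UNet NetworkManager on the same object.
public class NetworkManagerWiring : MonoBehaviour
{
    public GameObject playerPrefab;            // the boxHead player-prefab saved earlier
    public GameObject dissonancePlayerTracker; // the Dissonance Player Tracker prefab

    void Awake()
    {
        var manager = GetComponent<NetworkManager>();
        manager.playerPrefab = playerPrefab;               // Spawn Info > Player Prefab
        manager.spawnPrefabs.Add(dissonancePlayerTracker); // Registered Spawnable Prefabs
    }
}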

Build and test

To test, we will need to create and run a build of our project.

  1. Save your scene and project.
  2. Ensure the project is not set to Run in Background. Edit > Project Settings > Player Settings
  3. Open the Build Settings:
    • Click the Add Open Scenes button.
    • Click the Build and Run button.
  4. Once the app runs, connect as the host.
  5. Next, run the application in the Editor and connect as the client (either build can operate as host or client).
    NOTE: You will most likely need to adjust the positioning of your character models, as they will spawn on top of each other.
    • If you added the optional Player Controller components, you can maneuver one or both avatars into more advantageous positions.
    • If you did not add the component, you can switch to scene mode in the editor and manually move one or both models.
  6. Ensure one of the windows has focus and press the button you configured in the Unity Input Manager as the PTT button.
  7. Speak into your microphone and the remote player in the other window should lip-sync to your voice.

API Information

SalsaDissonanceLink is a very simple linkage component. As such, there is zero configuration for the component itself (except for enabling local lip-sync). Under the hood, there is a single public property that may be of interest to developers.

public float Boost //min 0, max 1
This value allows you to set a non-clipping amplification level. The value is represented as a normalized, "more-is-more" setting (0 being no amplification and 1 being full Boost).

IMPORTANT: A setting of 1 will modify the data being fed to SALSA as a constant stream of max value, meaning your model's mouth will be wide open constantly, even if there is no audio signal. The setting defaults to .65f and is a practical value in our testing. Your particular needs or scenario may have different requirements.

NOTE: this is a (run-time) programmatic setting and cannot be set in the Editor (design-time).
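
A minimal run-time sketch for setting Boost is shown below. It assumes SalsaDissonanceLink sits on the same GameObject and that the class is accessible without an additional using directive (add one if your version of the add-on uses a namespace).

using UnityEngine;

// Sets the Boost amplification level on the SalsaDissonanceLink component at run-time.
public class BoostExample : MonoBehaviour
{
    [Range(0f, 1f)]
    public float boost = 0.65f; // documented default; 0 = no amplification, 1 = full Boost

    void Start()
    {
        var link = GetComponent<SalsaDissonanceLink>();
        if (link != null)
            link.Boost = boost; // Boost can only be set from code, not in the Inspector
    }
}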

Release Notes:

2017-05-05 -- version 0.7.0 (beta):
Adds:
-- Local lip-sync feature. (NOTE: Not enabled by default.)

2017-04-26 -- version 0.6.0 (beta):
Adds:
-- Salsa2D compatibility.
Changes:
-- Boost property setter now stores (1 - value) to eliminate the calculation in the update loop (see the sketch following these release notes).
-- Default internal boost value changed to .35f to reflect the calculation moving into the property setter instead of the update loop.
-- Removed the UnityEditor using reference.
Fixes:
-- Boost property getter now returns the properly calculated value of Boost, rather than the internal boost field.

2017-04-04 -- version 0.5.0 (beta):
-- Initial release.
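
To make the 0.6.0 Boost change concrete, the following is a purely hypothetical illustration of a property that pre-computes (1 - value) in its setter; it is not the actual SalsaDissonanceLink source.

using UnityEngine;

// Hypothetical illustration of the 0.6.0 Boost change -- not the actual SalsaDissonanceLink code.
public class BoostPropertyIllustration : MonoBehaviour
{
    private float boost = 0.35f; // internal default (.35f) corresponds to a public Boost of .65f

    public float Boost
    {
        get { return 1f - boost; }                 // report the user-facing, "more-is-more" value
        set { boost = 1f - Mathf.Clamp01(value); } // pre-compute (1 - value) once, outside the update loop
    }
}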

Caveats for Use

  • By default, lip-sync is applied only to remote players; if the local player can see his/her own character, lip-sync will not be visible on it unless the local lip-sync option is enabled (requires SalsaDissonanceLink v0.7.0+ and Dissonance v1.0.9+ -- see Requirements).