FAQ: SALSA LipSync Suite v2
Please check this Frequently Asked Questions list for answers to common problems/questions. If you don't find your answer here, please check the forum site for SALSA LipSync Suite v2, or post a new question, comment, or feature request.
What is SALSA LipSync Suite?
It's a toolset built for the Unity3D game engine that simplifies the process of adding lip synchronization, emotes, and random and programmatic eye movement to 2D and 3D game characters.
Who is it for?
This asset is designed for anyone using the Unity3D game engine who wants to spice up their characters with automatic lip synchronization, emotion, natural-looking eye movement, eye-object tracking, or full programmatic control.
What are the License Requirements?
SALSA LipSync Suite v2 requires one (1) license per developer. The product may be used for any number of character models and/or projects. There are no sales limits for your final product(s).
What languages does SALSA LipSync work with?
SALSA lip-sync processing is language-agnostic. It will work with any language or sound. It analyzes amplitude data in the waveform, translating the information into a blending of configured visemes for fast, easy, and great-looking lip-sync approximation. The Custom Inspectors and all script documentation for SALSA are in English.
When will it be available in the Unity Asset Store, and how much will it cost?
SALSA LipSync Suite v2 is available now in the Unity Asset Store ($39). Previous owners of SALSA with RandomEyes can upgrade for a reduced price!
Where do I get technical information or support?
Does SALSA LipSync Suite work with armatures/bones?
While blendshapes are a great way to create facial movement, SALSA LipSync Suite now supports blendshapes, bones, sprites, UGUI sprites, textures, and materials. You may mix and match these as desired for each viseme or emote configuration.
What is required for my 3D model to work with SALSA LipSync Suite?
Pretty much any model that uses blendshapes or bones for facial animation can be configured to work with SALSA LipSync Suite. We have OneClick setups for several common model creation systems (see the features section). The requirement is that sufficient control be available in the mouth for lip synchronization, and ample control over other facial areas (e.g., brows, lids, nose) for emotes, blinking, etc.
Does SALSA LipSync Suite work with <insert model system here>?
We have developed supported OneClicks for several popular model creation systems. SALSA LipSync Suite v2 is a very flexible system that should technically work with nearly any model that supports sufficient facial animation. If there is no supported OneClick for your model of choice, it does not mean it will not work; it simply means the model would need to be configured manually. We have listed all model systems that we currently have documented workflows for. If your model utilizes blendshapes or bones for mouth and facial animation, odds are you can configure SALSA LipSync Suite to work with it.
NOTE: If your model does not have sufficient facial animation capabilities, you would have to add blendshapes or bones to your model to create animatable expressions.
Microphone input is not working on Android, iOS, or macOS?
Microphone input on Android, iOS, and macOS will likely require a specific microphone to be specified; these platforms do not work reliably when the default (null) microphone name is supplied.
By default, micInput targets the default microphone (a null string). This works perfectly fine on a PC; however, Android appears to have adopted the same behavior as iOS and macOS in that it requires a specified microphone to work. You will need to specify the microphone on your device.
Using the micInput API, you can do the following:
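As a minimal sketch of the idea, Unity's built-in Microphone class can enumerate the available capture devices so you can pass a real device name instead of the default (null) name. The SalsaMicInput component name, the CrazyMinnow.SALSA namespace, and the selectedMic member below are assumptions used for illustration; consult the micInput documentation for the exact API.

```csharp
using UnityEngine;
// using CrazyMinnow.SALSA;  // SALSA Suite namespace (assumed)

public class MicSelector : MonoBehaviour
{
    void Start()
    {
        // Unity's Microphone class lists the available capture devices.
        foreach (string device in Microphone.devices)
            Debug.Log("Found microphone: " + device);

        // Pick an explicit device instead of relying on the default
        // (null) name, which is unreliable on Android, iOS, and macOS.
        if (Microphone.devices.Length > 0)
        {
            string micName = Microphone.devices[0];
            // NOTE: 'selectedMic' is an assumed member name shown for
            // illustration only; see the micInput docs for the real API.
            // GetComponent<SalsaMicInput>().selectedMic = micName;
        }
    }
}
```

On Android and iOS, remember that microphone access also requires the appropriate platform permission (e.g., RECORD_AUDIO on Android) before any device names will be returned.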