Rhubarb Lip Sync allows you to quickly create 2D mouth animation from voice recordings. It analyzes your audio files, recognizes what is being said, then automatically generates lip sync information. You can use it for animating speech in computer games, animated cartoons, or any similar project.
Rhubarb Lip Sync integrates with the following applications:
- Adobe After Effects (see below)
- Moho and OpenToonz (see below)
- Spine by Esoteric Software (see below)
- Vegas Pro by Magix (see below)
- Visionaire Studio (see external link)
In addition, you can use Rhubarb Lip Sync’s command line interface (CLI) to generate files in various output formats (TSV/XML/JSON).
You can use Rhubarb Lip Sync to animate dialog right from Adobe After Effects. For more information, see the directory extras/AdobeAfterEffects of the download.
Rhubarb Lip Sync can create .dat switch data files, which are understood by Moho and OpenToonz. You can set the frame rate using the --datFrameRate option; to control the shape names, use the --datUsePrestonBlair flag. For more details, see Command-line options.
Rhubarb Lip Sync for Spine is a graphical tool that allows you to import a Spine project, perform automatic lip sync, then re-import the result into Spine. For more information, see the directory extras/EsotericSoftwareSpine of the download.
Rhubarb Lip Sync also comes with two plugin scripts for Vegas Pro (previously Sony Vegas). For more information, see the directory extras/MagixVegas of the download.
Rhubarb Lip Sync can use between six and nine different mouth positions. The first six mouth shapes (Ⓐ-Ⓕ) are the basic mouth shapes and the absolute minimum you have to draw for your character. These six mouth shapes were invented at the Hanna-Barbera studios for shows such as Scooby-Doo and The Flintstones. Since then, they have evolved into a de-facto standard for 2D animation, and have been widely used by studios like Disney and Warner Bros.
In addition to the six basic mouth shapes, there are three extended mouth shapes: Ⓖ, Ⓗ, and Ⓧ. These are optional. You may choose to draw all three of them, pick just one or two, or leave them out entirely.
Ⓐ | Closed mouth for the “P”, “B”, and “M” sounds. This is almost identical to the Ⓧ shape, but there is ever-so-slight pressure between the lips.
Ⓑ | Slightly open mouth with clenched teeth. This mouth shape is used for most consonants (“K”, “S”, “T”, etc.). It’s also used for some vowels such as the “EE” sound in bee.
Ⓒ | Open mouth. This mouth shape is used for vowels like “EH” as in men and “AE” as in bat. It’s also used for some consonants, depending on context. This shape is also used as an in-between when animating from Ⓐ or Ⓑ to Ⓓ. So make sure the animations ⒶⒸⒹ and ⒷⒸⒹ look smooth!
Ⓓ | Wide open mouth. This mouth shape is used for vowels like “AA” as in father.
Ⓔ | Slightly rounded mouth. This mouth shape is used for vowels like “AO” as in off and “ER” as in bird. This shape is also used as an in-between when animating from Ⓒ or Ⓓ to Ⓕ. Make sure the mouth isn’t wider open than for Ⓒ. Both ⒸⒺⒻ and ⒹⒺⒻ should result in smooth animation.
Ⓕ | Puckered lips. This mouth shape is used for “UW” as in you, “OW” as in show, and “W” as in way.
Ⓖ | Upper teeth touching the lower lip for “F” as in for and “V” as in very. This extended mouth shape is optional. If your art style is detailed enough, it greatly improves the overall look of the animation. If you decide not to use it, you can specify so using the --extendedShapes option.
Ⓗ | This shape is used for long “L” sounds, with the tongue raised behind the upper teeth. The mouth should be at least as far open as in Ⓒ, but not quite as far as in Ⓓ. This extended mouth shape is optional. Depending on your art style and the angle of the head, the tongue may not be visible at all. In this case, there is no point in drawing this extra shape. If you decide not to use it, you can specify so using the --extendedShapes option.
Ⓧ | Idle position. This mouth shape is used for pauses in speech. This should be the same mouth drawing you use when your character is walking around without talking. It is almost identical to Ⓐ, but with slightly less pressure between the lips: for Ⓧ, the lips should be closed but relaxed. This extended mouth shape is optional. Whether there should be any visible difference between the rest position Ⓧ and the closed talking mouth Ⓐ depends on your art style and personal taste. If you decide not to use it, you can specify so using the --extendedShapes option.
Rhubarb Lip Sync is a command-line tool that is currently available for Windows and OS X.
- Download the latest release and unzip the file anywhere on your computer.
- Call rhubarb, passing it an audio file as argument and telling it where to create the output file. In its simplest form, this might look like this: rhubarb -o output.txt my-recording.wav. There are additional command-line options you can specify in order to get better results.
- Rhubarb Lip Sync will analyze the sound file, animate it, and create an output file containing the animation. If an error occurs, Rhubarb Lip Sync will instead print an error message to stderr and exit with a non-zero exit code (see the sketch below for handling this from a script).
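If you are driving Rhubarb Lip Sync from a script, the same steps can be automated. The following is a minimal sketch in Python (not part of Rhubarb Lip Sync); it assumes the rhubarb executable is on your PATH, and the file names are placeholders.

import subprocess
import sys

# Run Rhubarb Lip Sync on a recording and write the animation to output.txt.
result = subprocess.run(
    ["rhubarb", "-o", "output.txt", "my-recording.wav"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # On failure, Rhubarb Lip Sync prints an error message to stderr
    # and exits with a non-zero exit code.
    print("Lip sync failed:", result.stderr, file=sys.stderr)
    sys.exit(1)

print("Animation written to output.txt")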
The following command-line options are the most common:
Option | Description
---|---
<input file> | The audio file to be analyzed. This must be the last command-line argument. Supported file formats are WAVE (.wav) and Ogg Vorbis (.ogg).
--recognizer <recognizer> | Specifies how Rhubarb Lip Sync recognizes speech within the recording. Options: pocketSphinx (use this for English recordings), phonetic (use this for recordings in any other language). Default value: pocketSphinx
--exportFormat <format> | The export format. Options: tsv, xml, json, dat. Default value: tsv
--dialogFile <file> | With this option, you can provide Rhubarb Lip Sync with the dialog text to get more reliable results. Specify the path to a plain-text file (in ASCII or UTF-8 format) containing the dialog contained in the audio file. Rhubarb Lip Sync will still perform word recognition internally, but it will prefer words and phrases that occur in the dialog file. This leads to better recognition results and thus more reliable animation. For instance, let’s say you’re recording dialog for a computer game. The script says: “That’s all gobbledygook to me,” but the voice artist ends up saying “That’s just gobbledygook to me,” deviating from the script. If you specify a dialog file with the original line (“That’s all gobbledygook to me”), Rhubarb Lip Sync will still produce better results, because it will watch out for the uncommon word “gobbledygook”. It ignores the dialog file where it audibly differs from the recording and benefits from it where it matches. It is always a good idea to specify the dialog text; this will usually lead to more reliable mouth animation, even if the text is not completely accurate.
--extendedShapes <string> | As described in Mouth shapes, Rhubarb Lip Sync uses six basic mouth shapes and up to three extended mouth shapes, which are optional. Use this option to specify which extended mouth shapes should be used. For example, to use only the Ⓖ and Ⓧ extended mouth shapes, specify GX. Default value: GHX
-o, --output <output file> | The name of the output file to create. If the file already exists, it will be overwritten. If you don’t specify an output file, the result will be written to stdout.
--version | Displays version information and exits.
--help | Displays usage information and exits.
--datFrameRate <frame rate> | Only valid when using the dat export format. Specifies the frame rate of the resulting switch data file. Default value: 24
--datUsePrestonBlair | Only valid when using the dat export format. By default, the switch data file uses Rhubarb Lip Sync’s alphabetic shape names; set this flag to use the Preston Blair shape names instead. Caution: This mapping is only applied when exporting, after the recording has been animated. To control which mouth shapes to use, use the --extendedShapes option. Tip: For optimal results, make sure your mouth drawings follow the guidelines in the Mouth shapes section. This is easier if you stick to the alphabetic names instead of the Preston Blair names. The only situation where you need to use the Preston Blair names is when you’re using OpenToonz, because OpenToonz only supports the Preston Blair names.
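The following Python sketch (not part of Rhubarb Lip Sync; the file names and dialog line are placeholders) shows how several of these options might be combined when calling rhubarb from a script:

import subprocess

# Write the scripted dialog line to a plain-text file for the --dialogFile option.
with open("dialog.txt", "w", encoding="utf-8") as f:
    f.write("That's all gobbledygook to me.")

subprocess.run(
    [
        "rhubarb",
        "--exportFormat", "json",      # or tsv, xml, dat
        "--dialogFile", "dialog.txt",  # hint the recognizer with the scripted line
        "--extendedShapes", "GX",      # use only the Ⓖ and Ⓧ extended mouth shapes
        "-o", "gobbledygook.json",
        "gobbledygook.wav",            # the input file must come last
    ],
    check=True,  # raises CalledProcessError if rhubarb exits with a non-zero code
)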
The following command-line options can be helpful in special situations, especially when automating Rhubarb Lip Sync.
Option | Description
---|---
--quiet | By default, Rhubarb Lip Sync writes a number of progress messages to stderr. Use this flag to suppress them. You can combine this option with the --machineReadable option.
--machineReadable | This option is useful if you want to integrate Rhubarb Lip Sync with another (possibly graphical) application. All status messages to stderr will be printed as machine-readable JSON objects; see Machine-readable status messages below.
--consoleLevel <level> | Sets the log level for reporting to the console (stderr). Only events with the specified level or higher will be reported. Default value: error
--logFile <file> | Creates a log file with diagnostic information at the specified path.
--logLevel <level> | Sets the log level for the log file. Only events with the specified level or higher will be logged. Default value: debug
--threads <number> | Rhubarb Lip Sync uses multithreading to speed up processing. By default, it creates as many worker threads as there are cores on your CPU, which results in optimal processing speed. You may choose to specify a lower number if you feel that Rhubarb Lip Sync is slowing down other applications. Specifying a higher number is not recommended, as it won’t result in any additional speed-up. Note that for short audio files, Rhubarb Lip Sync may choose to use fewer threads than specified. Default value: as many threads as your CPU has cores
The first step in processing an audio file is determining what is being said. More specifically, Rhubarb Lip Sync uses speech recognition to figure out what sound is being said at what point in time. You can choose between two recognizers:
PocketSphinx is an open-source speech recognition library that generally gives good results. This is the default recognizer. The downside is that PocketSphinx only recognizes English dialog. So if your recordings are in a language other than English, this is not a good choice.
Rhubarb Lip Sync also comes with a phonetic recognizer. Phonetic means that this recognizer won’t try to understand entire (English) words and phrases. Instead, it will recognize individual sounds and syllables. The results are usually less precise than those from the PocketSphinx recognizer. The advantage is that this recognizer is language-independent. Use it if your recordings are not in English.
The output of Rhubarb Lip Sync is a file that tells you which mouth shape to display at what time within the recording. You can choose between three file formats — TSV, XML, and JSON. The following paragraphs show you what each of these formats looks like.
TSV is the simplest and most compact export format supported by Rhubarb Lip Sync. Each line starts with a timestamp (in seconds), followed by a tab, followed by the name of the mouth shape. The following is the output for a recording of a person saying 'Hi.'
0.00 X
0.05 D
0.27 C
0.31 B
0.43 X
0.47 X
Here’s how to read it:
- At the beginning of the recording (0.00s), the mouth is closed (shape Ⓧ). The very first output will always have the timestamp 0.00s.
- 0.05s into the recording, the mouth opens wide (shape Ⓓ) for the “HH” sound, anticipating the “AY” sound that will follow.
- The second half of the “AY” diphthong (0.31s into the recording) requires clenched teeth (shape Ⓑ). Before that, shape Ⓒ is inserted as an in-between at 0.27s. This allows for a smoother animation from Ⓓ to Ⓑ.
- 0.43s into the recording, the dialog is finished and the mouth closes again (shape Ⓧ).
- The last output line in TSV format is special: its timestamp is always the very end of the recording (truncated to a multiple of 0.01s) and its value is always a closed mouth (shape Ⓧ or Ⓐ, depending on your --extendedShapes settings).
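Because TSV is plain tab-separated text, it is easy to read from any language. The following Python sketch (the file name is a placeholder) parses the output into (timestamp, shape) pairs:

import csv

# Read the TSV output produced by something like: rhubarb -o output.tsv my-recording.wav
with open("output.tsv", newline="") as f:
    cues = [(float(time), shape) for time, shape in csv.reader(f, delimiter="\t")]

for time, shape in cues:
    print(f"{time:.2f}s -> {shape}")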
XML format is rather verbose. The following is the output for a person saying 'Hi,' the same recording as above.
<?xml version="1.0" encoding="utf-8"?>
<rhubarbResult>
<metadata>
<soundFile>C:\Users\Daniel\Desktop\av\hi\hi.wav</soundFile>
<duration>0.47</duration>
</metadata>
<mouthCues>
<mouthCue start="0.00" end="0.05">X</mouthCue>
<mouthCue start="0.05" end="0.27">D</mouthCue>
<mouthCue start="0.27" end="0.31">C</mouthCue>
<mouthCue start="0.31" end="0.43">B</mouthCue>
<mouthCue start="0.43" end="0.47">X</mouthCue>
</mouthCues>
</rhubarbResult>
The file starts with a metadata block containing the full path of the original recording and its duration (truncated to a multiple of 0.01s). After that, each mouthCue element indicates the start and end of a certain mouth shape, as explained for TSV format. Note that the end of each mouth cue is identical with the start of the following one. This is a bit redundant, but it means that we don’t need a special final element like in TSV format.
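The following Python sketch (the file name is a placeholder) reads this format using the standard xml.etree.ElementTree module:

import xml.etree.ElementTree as ET

# Parse the XML output produced by Rhubarb Lip Sync.
root = ET.parse("output.xml").getroot()

duration = float(root.find("./metadata/duration").text)
print(f"Recording length: {duration:.2f}s")

for cue in root.find("mouthCues"):
    start = float(cue.get("start"))
    end = float(cue.get("end"))
    print(f"{start:.2f}s-{end:.2f}s: shape {cue.text}")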
JSON format is very similar to XML format. The choice mainly depends on the programming language you use, which may have built-in support for one format but not the other. The following is the output for a person saying 'Hi,' the same recording as above.
{
"metadata": {
"soundFile": "C:\\Users\\Daniel\\Desktop\\av\\hi\\hi.wav",
"duration": 0.47
},
"mouthCues": [
{ "start": 0.00, "end": 0.05, "value": "X" },
{ "start": 0.05, "end": 0.27, "value": "D" },
{ "start": 0.27, "end": 0.31, "value": "C" },
{ "start": 0.31, "end": 0.43, "value": "B" },
{ "start": 0.43, "end": 0.47, "value": "X" }
]
}
There is nothing surprising here; everything said about XML format applies to JSON, too.
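At playback time, you typically need to know which mouth shape is active at a given moment. The following Python sketch (the file name is a placeholder) loads the JSON output and looks up the active cue:

import bisect
import json

with open("output.json") as f:
    cues = json.load(f)["mouthCues"]

def shape_at(time: float) -> str:
    # Find the last cue that starts at or before the given time.
    starts = [cue["start"] for cue in cues]
    index = bisect.bisect_right(starts, time) - 1
    return cues[max(index, 0)]["value"]

print(shape_at(0.10))  # "D" for the example recording above
print(shape_at(0.45))  # "X"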
Use the --machineReadable command-line option to enable machine-readable status messages. In this mode, each line printed to stderr will be an object in JSON format. Every object contains the following:
- Property type: The type of the event. Currently, one of "start" (application start), "progress" (numeric progress), "success" (successful termination), "failure" (unsuccessful termination), and "log" (a log message without structured information).
- Event-specific structured data. For instance, a "progress" event contains the property value with a numeric value between 0.0 and 1.0.
- Property log: A log message describing the event, plus severity information. If you aren’t interested in the structured data, you can display this as a fallback. For instance, a "progress" event with the structured information "value": 0.69 may contain the following redundant log message: "Progress: 69%".
You can combine this option with the --consoleLevel option. Note, however, that this only affects unstructured events of type "log" (not to be confused with the log property each event contains).
The following is an example output to stderr from a successful run:
{ "type": "start", "file": "hi.wav", "log": { "level": "Info", "message": "Application startup. Input file: \"hi.wav\"." } }
{ "type": "progress", "value": 0.00, "log": { "level": "Trace", "message": "Progress: 0%" } }
{ "type": "progress", "value": 0.01, "log": { "level": "Trace", "message": "Progress: 1%" } }
{ "type": "progress", "value": 0.03, "log": { "level": "Trace", "message": "Progress: 3%" } }
{ "type": "progress", "value": 0.06, "log": { "level": "Trace", "message": "Progress: 6%" } }
{ "type": "progress", "value": 0.69, "log": { "level": "Trace", "message": "Progress: 68%" } }
{ "type": "progress", "value": 1.00, "log": { "level": "Trace", "message": "Progress: 100%" } }
// Result data, printed to stdout...
{ "type": "success", "log": { "level": "Info", "message": "Application terminating normally." } }
The following is an example output to stderr from a failed run:
{ "type": "start", "file": "no-such-file.wav", "log": { "level": "Info", "message": "Application startup. Input file: \"no-such-file.wav\"." } }
{ "type": "failure", "reason": "Error processing file \"no-such-file.wav\".\nCould not open sound file \"no-such-file.wav\".\nNo such file or directory", "log": { "level": "Fatal", "message": "Application terminating with error: Error processing file \"no-such-file.wav\".\nCould not open sound file \"no-such-file.wav\".\nNo such file or directory" } }
Note that the output format adheres to SemVer. That means that the JSON output created after a minor upgrade will still be compatible. Note, however, that the following kinds of changes may occur at any time, because I consider them non-breaking:
- Additional types of progress events. Just ignore those events whose types you do not know, or use their unstructured log property.
- Additional properties in any object. Just ignore properties you aren’t interested in.
- Changes in JSON formatting, such as a re-ordering of properties or changes in whitespace (except for line breaks: every event will remain on a single line).
- Fewer or more events of type "log", or changes in the wording of log messages.
Rhubarb Lip Sync uses Semantic Versioning (SemVer) for its command-line interface. For general information on Semantic Versioning, have a look at the official SemVer website.
As a rule of thumb, everything you can use through the command-line interface adheres to SemVer. Everything else (i.e., the source code, integrations with third-party software, etc.) does not.
Have you created something great using Rhubarb Lip Sync? — Let me know on Twitter or send me an email at dwolf@dannad.de!
Do you need help? Have you spotted a bug? Do you have a suggestion? — Create an issue!
JetBrains have been kind enough to supply me with a free Open Source license of ReSharper C++.