SyncSink.wasm is a web application to synchronize media files with shared audio. SyncSink matches and aligns shared audio and determines offsets in seconds. With these precise offsets it becomes trivial to sync media files.
SyncSink is, for example, used to synchronize video files: when you have many video captures of the same event, the audio attached to these captures is used to align and sync multiple (independently operated) cameras. Evidently, SyncSink can also synchronize audio captured from many (independent) microphones if some environmental sound is shared between the recordings.
SyncSink.wasm is based on the Java SyncSink software. SyncSink can also be used to synchronize data streams. For those applications, please see the article titled "Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment".
The first step is to extract, downmix and resample audio from the incoming media file. This is done with a wasm version of ffmpeg. If a video file with multiple audio streams enters the system, the first audio stream is used for synchronization.
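The extraction step corresponds to an ordinary ffmpeg invocation. As a sketch, assuming a mono 16 kHz WAV target (the file names and the sample rate used by the actual wasm build are assumptions here), the equivalent native ffmpeg call would look like:

```shell
# Hedged sketch of the extraction step: select the first audio stream,
# downmix to mono, resample, and write a WAV file ready for fingerprinting.
# input.mp4 / output.wav and the 16 kHz rate are illustrative assumptions.
ffmpeg -i input.mp4 -map 0:a:0 -ac 1 -ar 16000 output.wav
```

The `-map 0:a:0` option makes the "first audio stream" choice explicit rather than relying on ffmpeg's default stream selection.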
The next step is to extract fingerprints, both from the reference media file and from every other media file. By aligning the fingerprints of each other file with those of the reference, a rough offset is determined: how much that file needs to shift to match the reference. This rough offset is accurate to about 8 ms.
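The alignment of fingerprints can be sketched as a voting scheme: each pair of matching fingerprint hashes votes for the time difference between its occurrence in the reference and in the other file, and the most popular difference wins. This is a hypothetical illustration, not the actual SyncSink implementation; the `roughOffset` function, the `{ hash, time }` fingerprint shape, and the 8 ms bin width are assumptions chosen to match the accuracy stated above.

```javascript
// Hypothetical sketch: estimate a rough offset by letting matching
// fingerprint hashes vote on the time difference between the two files.
function roughOffset(refPrints, otherPrints) {
  // refPrints / otherPrints: arrays of { hash, time } with time in seconds.
  const byHash = new Map();
  for (const p of refPrints) {
    if (!byHash.has(p.hash)) byHash.set(p.hash, []);
    byHash.get(p.hash).push(p.time);
  }
  // Count votes per quantized time difference (8 ms bins, the rough accuracy).
  const votes = new Map();
  for (const p of otherPrints) {
    for (const tRef of byHash.get(p.hash) ?? []) {
      const bin = Math.round((tRef - p.time) / 0.008);
      votes.set(bin, (votes.get(bin) ?? 0) + 1);
    }
  }
  // The bin with the most votes is the rough offset, in seconds.
  let bestBin = null, bestCount = -1;
  for (const [bin, count] of votes) {
    if (count > bestCount) { bestBin = bin; bestCount = count; }
  }
  return bestBin === null ? null : bestBin * 0.008;
}
```

For example, if the other file's fingerprints all occur 0.5 s earlier than in the reference, every pair votes for the same bin and the function returns an offset of about 0.5 s.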
The last step improves the rough offset by calculating the cross-covariance between the reference and the other files. Since the rough offset already tells us where the audio is likely to match, the number of cross-covariance calculations, which are computationally intensive, can be kept small. In the ideal case the cross-covariance is stable and improves the offset to audio-sample accuracy.
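The refinement can be illustrated as a search over a small window of sample lags around the rough offset, keeping the lag with the highest cross-covariance. This is a minimal sketch, not the SyncSink code: `refineOffset`, its parameters, and the brute-force search are assumptions; the real benefit of the rough offset is exactly that `radius` can stay small.

```javascript
// Hypothetical sketch: refine a rough offset (in samples) by searching a
// small window of lags around it for the maximum cross-covariance.
function refineOffset(ref, other, roughLag, radius) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mRef = mean(ref), mOther = mean(other);
  let bestLag = roughLag, bestCov = -Infinity;
  // Only lags within `radius` samples of the rough offset are evaluated,
  // which keeps the computationally intensive part cheap.
  for (let lag = roughLag - radius; lag <= roughLag + radius; lag++) {
    let cov = 0, n = 0;
    for (let i = 0; i < other.length; i++) {
      const j = i + lag;
      if (j < 0 || j >= ref.length) continue;
      cov += (ref[j] - mRef) * (other[i] - mOther);
      n++;
    }
    if (n > 0 && cov / n > bestCov) { bestCov = cov / n; bestLag = lag; }
  }
  return bestLag; // sample-accurate offset
}
```

With a signal that is an exact 10-sample-shifted copy of the reference and a rough lag of, say, 8, the search window around 8 recovers the true lag of 10.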
To use it, go to the SyncSink.wasm website and drag and drop your media files, similar to the screen capture above. If the same audio is found in the various media files, a timebox plot appears with a calculated offset. The JSON file provides more insight into the matches found.
There is also a command line version of SyncSink.wasm. For an example, go to the examples/node directory and run the following from the command line:
node --no-experimental-fetch sync.js
Some relevant reading material concerning SyncSink.wasm:
- Six, Joren and Leman, Marc "Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment" (2015)
- Six, Joren and Leman, Marc "Panako – A Scalable Acoustic Fingerprinting System Handling Time-Scale and Pitch Modification" (2014)
- Wang, Avery L. "An Industrial-Strength Audio Search Algorithm" (2003)
- Ellis, Dan and Whitman, Brian and Porter, Alastair "Echoprint – An Open Music Identification Service" (2011)
- Sonnleitner, Reinhard and Widmer, Gerhard "Quad-based Audio Fingerprinting Robust To Time And Frequency Scaling" (2014)
The SyncSink.wasm software was developed at IPEM, Ghent University by Joren Six.
- SyncSink The original SyncSink Java software this work is based on.
- PFFFT A pretty fast FFT library. BSD licensed.
- PFFFT.wasm A wasm version of pffft.
- ffmpeg ‘A complete, cross-platform solution to record, convert and stream audio and video.’
- ffmpeg.audio.wasm A wasm version of ffmpeg with a focus on audio extraction.
- chart.js A JavaScript charting library used in the UI.
If you use the synchronization algorithms for research purposes, please cite the following work:
@article{six2015multimodal,
author = {Joren Six and Marc Leman},
title = {{Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment}},
issn = {1783-7677},
volume = {9},
number = {3},
pages = {223-229},
doi = {10.1007/s12193-015-0196-1},
journal = {{Journal of Multimodal User Interfaces}},
publisher = {Springer Berlin Heidelberg},
year = 2015
}