The FaceSync toolbox provides 3D blueprints for building the head-mounted camera setup described in our paper. The toolbox also provides functions to automatically synchronize videos based on audio, manually align audio, plot facial landmark movements, and inspect synchronized videos alongside their plotted data.
To install (on OS X or Linux), open Terminal and type
pip install facesync
or
git clone https://github.com/jcheong0428/facesync.git
then, in the repository folder, type
python setup.py install
For full functionality, FaceSync requires FFmpeg and the libav library.
Linux
sudo apt-get install libav-tools
OS X
brew install ffmpeg
brew install libav
FaceSync also requires the following packages:
- numpy
- scipy
You may also install these via
pip install -r requirements.txt
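As a quick sanity check (not part of the toolbox), you can verify that both dependencies are importable:
import numpy
import scipy
# Print the installed versions of the required packages
print(numpy.__version__, scipy.__version__)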
Synchronizing videos involves three steps:
- Extract Audio from Target Video
- Find offset with Extracted Audio
- Trim Video using Offset. *If you need to resize your video, do so before trimming; otherwise the timing can be off.* (A trimming sketch follows the example below.)
from facesync.facesync import facesync
# Change the file name to include the full path
video_files = ['path/to/sample1.MP4']
target_audio = 'path/to/cosan_synctune.wav'
# Initialize the facesync class
fs = facesync(video_files=video_files,target_audio=target_audio)
# Extracts audio from sample1.MP4
fs.extract_audio()
# Find offset by correlation
fs.find_offset_corr(search_start=14,search_end=16)
print(fs.offsets)
# Find offset by fast Fourier transform
fs.find_offset_fft()
print(fs.offsets)
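The trimming step can also be done with FFmpeg directly once the offsets are known. The following is a minimal sketch rather than a facesync helper; it assumes fs.offsets holds one offset in seconds per video file, and the output file name is hypothetical.
import subprocess
# Trim sample1.MP4 so it starts at the detected offset
# (re-encodes the video for frame-accurate seeking)
offset = fs.offsets[0]  # assumption: one offset in seconds per video file
subprocess.call(['ffmpeg', '-ss', str(offset), '-i', 'path/to/sample1.MP4',
                 'path/to/sample1_trimmed.MP4'])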
You can also manually align audio using the AudioAligner utility:
%matplotlib notebook
from facesync.utils import AudioAligner
file_original = 'path/to/audio.wav'
file_sample = 'path/to/sample.wav'
# Opens an interactive plot for manually aligning the sample audio to the original
AudioAligner(original=file_original, sample=file_sample)
FaceSync can also plot facial landmark movements for specified action units:
%matplotlib notebook
from facesync.utils import ChangeAU, plotface
# Get face landmark positions with action units AU6 (cheek raiser),
# AU12 (lip corner puller), and AU17 (chin raiser) activated at full weight
changed_face = ChangeAU(aulist=['AU6', 'AU12', 'AU17'], au_weight=1.0)
# Plot the resulting face
ax = plotface(changed_face)
To inspect a synchronized video alongside its data, use the VideoViewer utility:
import facesync.utils as utils
%matplotlib notebook
# Display the video alongside the data in fexDataFrame
utils.VideoViewer(path_to_video='path/to/video.mp4', data_df=fexDataFrame)
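The data_df argument (fexDataFrame above) is assumed here to be a pandas DataFrame of facial expression data with one row per frame; a minimal sketch of loading one from a hypothetical CSV:
import pandas as pd
# Hypothetical CSV of facial expression values, one row per video frame
fexDataFrame = pd.read_csv('path/to/fex_data.csv')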
Please cite the following paper if you use our head-mounted camera setup or software.