This is the PyTorch implementation for CMPC, as described in our paper:
**Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast** (IJCAI 2022)
```bibtex
@inproceedings{zhu2022unsupervised,
  title={Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast},
  author={Zhu, Boqing and Xu, Kele and Wang, Changjian and Qin, Zheng and Sun, Tao and Wang, Huaimin and Peng, Yuxing},
  booktitle={Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  pages={3787--3794},
  year={2022},
  month={7}
}
```
We also provide pretrained models and testing resources.

Requirements:
- torch==1.7.0+cu110
- matplotlib==3.4.3
- pykeops==1.5
- pandas==1.1.3
- librosa==0.6.2
- Pillow==9.0.1
- PyYAML==6.0
- scikit_learn==1.0.2
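The dependencies can be installed with pip roughly as follows (a sketch: the CUDA 11.0 PyTorch wheel comes from the torch_stable index, and you may need a different build for your CUDA setup):

```shell
pip install torch==1.7.0+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install matplotlib==3.4.3 pykeops==1.5 pandas==1.1.3 librosa==0.6.2 \
    Pillow==9.0.1 PyYAML==6.0 scikit_learn==1.0.2
```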
Pretrained models:

CID | CMPC
---|---
To speed up training iterations, we extract log-mel features from the voice data in a pre-processing step:
```shell
>> cd experiments/cmpc
>> python data_transform.py --wav_dir {directory-of-the-wav-file} --logmel_dir {destination-path}
```
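Conceptually, the pre-processing amounts to something like the sketch below. The actual parameters (sampling rate, number of mel bands, etc.) are set inside data_transform.py; the values here are assumptions:

```python
import librosa
import numpy as np

def extract_logmel(wav_path, sr=16000, n_mels=64):
    """Compute a log-mel spectrogram for one utterance (illustrative only)."""
    # Resample the waveform to a fixed rate; sr and n_mels are assumed values,
    # not necessarily those used by data_transform.py.
    y, _ = librosa.load(wav_path, sr=sr)
    # Mel-scaled power spectrogram followed by log compression.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-6).astype(np.float32)  # shape: (n_mels, n_frames)
```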
The configurations are written in the CONFIG.yaml file and can be changed to fit your needs, such as the path information. The unsupervised training process can then be started as:

```shell
>> python train.py CONFIG.yaml
```
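CONFIG.yaml is plain YAML, so inside the scripts it is presumably parsed with PyYAML along these lines (a sketch; the actual key names are whatever CONFIG.yaml defines):

```python
import yaml

# Parse CONFIG.yaml into a nested Python dict.
with open("CONFIG.yaml") as f:
    cfg = yaml.safe_load(f)

# Path settings can then be read as e.g. cfg["logmel_dir"];
# the key names here are hypothetical -- check CONFIG.yaml for the real ones.
```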
We run experiments on three evaluation protocols: matching, verification, and retrieval. '--ckp_path' can be the path of the downloaded model or of your own trained model.
```shell
>> python matching.py CONFIG.yaml --ckp_path {checkpoint path}
>> python verification.py CONFIG.yaml --ckp_path {checkpoint path}
>> python retrieval.py CONFIG.yaml --ckp_path {checkpoint path}
```
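For intuition, the verification protocol can be sketched as follows: embed voices and faces with the two encoders, score each voice-face pair by cosine similarity, and report AUC over same-identity labels. This is an illustration, not the repository's exact evaluation code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cosine_scores(voice_emb, face_emb):
    """Cosine similarity between row-aligned (N, D) embedding matrices."""
    v = voice_emb / np.linalg.norm(voice_emb, axis=1, keepdims=True)
    f = face_emb / np.linalg.norm(face_emb, axis=1, keepdims=True)
    return np.sum(v * f, axis=1)

def verification_auc(voice_emb, face_emb, labels):
    """labels[i] = 1 if pair i shares an identity, else 0."""
    return roc_auc_score(labels, cosine_scores(voice_emb, face_emb))
```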
The matching, verification, and retrieval test data are released in the ./data directory.