A CrossFormer-Based Hashing Method
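For orientation, the sketch below shows the generic shape of a deep-hashing model: a feature backbone followed by a hash layer whose tanh outputs are binarized with sign() at retrieval time. This is only an illustration of the technique named in the title; the backbone here is a stand-in linear module, not this repo's CrossFormer encoder, and all names are placeholders.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    # Generic deep-hashing head: backbone features -> K-bit codes.
    def __init__(self, backbone: nn.Module, feat_dim: int, n_bits: int):
        super().__init__()
        self.backbone = backbone
        self.hash_layer = nn.Linear(feat_dim, n_bits)

    def forward(self, x):
        feats = self.backbone(x)
        # tanh is the usual smooth surrogate for the non-differentiable sign()
        return torch.tanh(self.hash_layer(feats))

# Stand-in backbone for illustration only (NOT the repo's CrossFormer encoder)
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 512))
model = HashHead(backbone, feat_dim=512, n_bits=64)

with torch.no_grad():
    codes = torch.sign(model(torch.randn(2, 3, 224, 224)))  # {-1, +1} codes at retrieval time
print(codes.shape)  # torch.Size([2, 64])
```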
We use COCO2014, NUSWIDE-TC21, and MIRFlickr25K in our experiments:
COCO2014
Link: https://pan.baidu.com/s/1D88pWPcmGRgVBra2lB3MwQ?pwd=1023 Extraction code: 1023
NUSWIDE-TC21
Link: https://pan.baidu.com/s/1OZk-A8sohjl69oSG0reJWg?pwd=1024 Extraction code: 1024
MIRFlickr25K
Link: https://pan.baidu.com/s/1WnUxKbZ4cIwxIkFOgyPjuQ Extraction code: 1025
Reference for how to divide the datasets (a sketch of a typical split follows):
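Since the exact split sizes are not pinned down here, the sketch below only illustrates the common image-retrieval protocol: hold out a query set, retrieve against the remaining database, and train the hashing model on a subset of that database. Every number in it is a hypothetical placeholder, not this repo's actual protocol.

```python
import numpy as np

# Hypothetical placeholder sizes, NOT this repo's actual split.
N_TOTAL, N_QUERY, N_TRAIN = 20000, 2000, 10000

rng = np.random.default_rng(seed=0)
indices = rng.permutation(N_TOTAL)

query_idx = indices[:N_QUERY]       # held-out query images
database_idx = indices[N_QUERY:]    # retrieval database (everything else)
train_idx = database_idx[:N_TRAIN]  # training subset drawn from the database
```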
This repo uses the apex and timm packages.
To avoid environment conflicts, we provide a pre-built conda environment; just follow the step shown below.
Use this Python interpreter:
Link: https://pan.baidu.com/s/1rMGOdoAi8kZxWZAWsFgh9A?pwd=1026 Extraction code: 1026
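After downloading the environment, a quick import check run with the provided interpreter confirms the key dependencies resolve. This snippet assumes nothing beyond the packages named above (torch, timm, apex) being installed in that environment.

```python
# Run with the provided interpreter; only checks that the stated
# dependencies (torch, timm, apex) import cleanly.
import torch
import timm
import apex  # noqa: F401  (apex provides the mixed-precision utilities)

print("torch:", torch.__version__)
print("timm:", timm.__version__)
print("CUDA available:", torch.cuda.is_available())
```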
If you use this code, please cite:

@article{guo2021cmt,
  title={CMT: Convolutional Neural Networks Meet Vision Transformers},
  author={Guo, Jianyuan and Han, Kai and Wu, Han and Xu, Chang and Tang, Yehui and Xu, Chunjing and Wang, Yunhe},
  journal={arXiv preprint arXiv:2107.06263},
  year={2021}
}

@article{RelaHash,
  title={RelaHash: Deep Hashing With Relative Position},
  author={Minh, Pham Vu Thai and Viet, Nguyen Dong Duc and Son, Ngo Tung and Anh, Bui Ngoc and Jaafar, Jafreezal},
  journal={IEEE Access},
  volume={11},
  pages={30094--30108},
  year={2023},
  doi={10.1109/ACCESS.2023.3259104}
}

@inproceedings{wang2021crossformer,
  title={CrossFormer: A Versatile Vision Transformer Hinging on Cross-scale Attention},
  author={Wang, Wenxiao and Yao, Lu and Chen, Long and Lin, Binbin and Cai, Deng and He, Xiaofei and Liu, Wei},
  booktitle={International Conference on Learning Representations (ICLR)},
  url={https://openreview.net/forum?id=_PHymLIxuI},
  year={2022}
}