yolov7-pose

Implementation of "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors"

The pose estimation implementation is based on YOLO-Pose.

Dataset preparation

Download the [Keypoints Labels of MS COCO 2017] and arrange them as expected by data/coco_kpts.yaml.
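The labels are assumed to follow the YOLO-Pose convention: one line per person, containing the class index, the normalized bounding box, and the 17 COCO keypoints as normalized (x, y, visibility) triplets; verify this against the downloaded annotations. Schematically, one label line looks like:

class x_center y_center width height kpt1_x kpt1_y kpt1_vis ... kpt17_x kpt17_y kpt17_vis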

Training

Download the pretrained weights yolov7-w6-person.pt and place them under weights/, then launch multi-GPU training:

python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-w6-pose.yaml --weights weights/yolov7-w6-person.pt --batch-size 128 --img 960 --kpt-label --sync-bn --device 0,1,2,3,4,5,6,7 --name yolov7-w6-pose --hyp data/hyp.pose.yaml
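If only a single GPU is available, the same script and flags should work without the distributed launcher. The command below is a sketch derived from the multi-GPU command above; the reduced batch size is an assumption and may need tuning to your GPU memory:

python train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-w6-pose.yaml --weights weights/yolov7-w6-person.pt --batch-size 16 --img 960 --kpt-label --device 0 --name yolov7-w6-pose --hyp data/hyp.pose.yaml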

Deploy

TensorRT: https://github.com/nanmi/yolov7-pose

Testing

Download the trained pose model yolov7-w6-pose.pt, then evaluate on the COCO keypoint validation set:

python test.py --data data/coco_kpts.yaml --img 960 --conf 0.001 --iou 0.65 --weights yolov7-w6-pose.pt --kpt-label
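Beyond COCO evaluation, a minimal standalone inference sketch is shown below. The helper names (letterbox, non_max_suppression_kpt, output_to_keypoint) and the checkpoint layout (the model object stored under the 'model' key) are assumptions taken from this repository's keypoint example and may differ in your checkout; the input image path is hypothetical.

# Minimal pose-inference sketch; run from the repository root so utils/ is importable.
import cv2
import torch
from torchvision import transforms

from utils.datasets import letterbox
from utils.general import non_max_suppression_kpt
from utils.plots import output_to_keypoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Assumption: the released checkpoint stores the full model object under 'model'.
ckpt = torch.load('yolov7-w6-pose.pt', map_location=device)
model = ckpt['model'].float().eval().to(device)

# Letterbox to the training resolution (960) and build a 1-image batch.
img = cv2.imread('person.jpg')  # hypothetical input image
img = letterbox(img, 960, stride=64, auto=True)[0]
tensor = transforms.ToTensor()(img).unsqueeze(0).to(device)

with torch.no_grad():
    pred, _ = model(tensor)

# Keypoint-aware NMS, then flatten to (batch_id, class, x, y, w, h, conf, kpts...).
pred = non_max_suppression_kpt(pred, 0.25, 0.65,
                               nc=model.yaml['nc'], nkpt=model.yaml['nkpt'],
                               kpt_label=True)
keypoints = output_to_keypoint(pred)
print(keypoints.shape)  # one row per detected person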

Citation

@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}

Acknowledgements
