EvoPose2D

EvoPose2D is a two-stage human pose estimation model designed using neuroevolution that achieves state-of-the-art accuracy on COCO. This repository contains the source code for EvoPose2D: Pushing the Boundaries of 2D Human Pose Estimation using Neuroevolution, implemented using Python 3.7 and TensorFlow 2.3.

Proof of results: The json files containing the results reported in the paper can be found here. These results were generated using the bfloat16 models.
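
If the result files follow the standard COCO keypoint results format, they can be re-scored offline with pycocotools. The snippet below is a minimal sketch, assuming you have the COCO validation annotations and one of the result json files locally; the file names shown are placeholders, not the actual names of the released files.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Paths are assumptions -- point them at your local copies.
ANN = "annotations/person_keypoints_val2017.json"
RES = "results/evopose2d_results.json"   # hypothetical filename

coco_gt = COCO(ANN)                 # ground-truth keypoint annotations
coco_dt = coco_gt.loadRes(RES)      # model predictions in COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()               # prints keypoint AP / AR
```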

Getting Started

  1. If you haven't already, install Anaconda or Miniconda.
  2. Create a new conda environment with Python 3.7: $ conda create -n evopose2d python=3.7
  3. Clone this repo: $ git clone https://github.com/wmcnally/evopose2d.git
  4. Install the dependencies: $ pip install -r requirements.txt
  5. Download the 2017 COCO training and validation images and extract them.
  6. Download the 2017 COCO annotations and extract them to the same folder.
  7. Download the validation person detections (from the HRNet repo).
  8. Use write_tfrecords.py and the detection json to generate the training and validation TFRecords (see the sketch below). If using Cloud TPU, upload the TFRecords to a Storage Bucket.
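
Once the TFRecords are written, it can be worth sanity-checking them before training. The following is a minimal sketch; the file pattern is an assumption, and the feature keys printed depend on the schema defined in write_tfrecords.py.

```python
import tensorflow as tf

# The output location and naming pattern are assumptions -- check
# write_tfrecords.py for where the records are actually written.
files = tf.data.Dataset.list_files("tfrecords/train*.tfrec")
raw = tf.data.TFRecordDataset(files)

for record in raw.take(1):
    # Parse one serialized example and list the feature keys it contains.
    example = tf.train.Example()
    example.ParseFromString(record.numpy())
    print(list(example.features.feature.keys()))
```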

Demo

  1. Download a pretrained float32 model from here and place it in a new models directory.
  2. Run $ python3 demo.py -c [model_name].yaml -p [/path/to/coco/dataset]. The test image result will be written to the main directory (a minimal inference sketch follows below the image).

Prediction visualization for image id 785 using the evopose2d_M_f32 model:

[demo output image]
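
A downloaded float32 model can also be queried directly from Python. The sketch below is a minimal, hypothetical example and not a substitute for demo.py: the model filename, input resolution, and preprocessing are assumptions and may differ from what the script actually does.

```python
import numpy as np
import tensorflow as tf

# Assumptions: the model is a Keras model saved as .h5 and expects a
# cropped person image resized to 256x192 (H x W).
MODEL_PATH = "models/evopose2d_M_f32.h5"   # hypothetical filename
INPUT_SHAPE = (256, 192)                   # assumed input resolution

model = tf.keras.models.load_model(MODEL_PATH, compile=False)

# Load and preprocess a single person crop.
img = tf.io.decode_jpeg(tf.io.read_file("person_crop.jpg"), channels=3)
img = tf.image.resize(img, INPUT_SHAPE)
img = tf.cast(img, tf.float32)[tf.newaxis]  # add batch dimension

# The network outputs keypoint heatmaps; the argmax of each channel gives
# rough (x, y) keypoint locations in heatmap coordinates.
heatmaps = model.predict(img)[0]            # (h, w, num_keypoints)
h, w, k = heatmaps.shape
flat_idx = heatmaps.reshape(-1, k).argmax(axis=0)
ys, xs = np.unravel_index(flat_idx, (h, w))
print(list(zip(xs.tolist(), ys.tolist())))
```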

Validation

Download a pretrained model from here and place it in a new models directory. The bfloat16 models run best on TPU and may be slow on GPU.

Modify the paths to the TFRecords and validation annotation json in the yaml file of the model you downloaded. If using GPU, change the validation batch size to suit your total GPU memory.
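If you prefer to patch the config programmatically rather than by hand, something like the following works with PyYAML. The key names used here are hypothetical; inspect the downloaded yaml file for the actual field names.

```python
import yaml  # PyYAML

CFG = "models/evopose2d_M_f32.yaml"   # the yaml file you downloaded

with open(CFG) as f:
    cfg = yaml.safe_load(f)

# Hypothetical key names -- check the real yaml for the actual structure.
cfg["DATASET"]["TFRECORDS_VAL"] = "/data/coco/tfrecords/val"
cfg["DATASET"]["ANNOT"] = "/data/coco/annotations/person_keypoints_val2017.json"
cfg["VAL"]["BATCH_SIZE"] = 16  # reduce to fit your total GPU memory

with open(CFG, "w") as f:
    yaml.safe_dump(cfg, f)
```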

GPU: $ python3 validate.py -c [model_name].yaml

Cloud TPU: $ python3 validate.py -c [model_name].yaml --tpu [tpu_name]

Training

Modify the paths to the TFRecords and validation annotation json in the yaml file of the model you want to train. If using GPU, change the training and validation batch sizes to suit your total GPU memory and set bfloat16 to 'false'.

GPU: $ python3 train.py -c [model_name].yaml

Cloud TPU: $ python3 train.py -c [model_name].yaml --tpu [tpu_name]
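
The GPU and TPU commands differ mainly in which tf.distribute strategy the script runs under. The sketch below shows how the two paths typically map to TensorFlow 2.x strategies; it is a simplified illustration, and the actual device-selection logic lives in train.py and may differ.

```python
import tensorflow as tf

def get_strategy(tpu_name=None):
    """Return TPUStrategy when a TPU name is given (the --tpu flag),
    otherwise MirroredStrategy across the local GPUs."""
    if tpu_name:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_name)
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    return tf.distribute.MirroredStrategy()

strategy = get_strategy()        # pass a TPU name to train on Cloud TPU
with strategy.scope():
    # build and compile the model inside the scope so its variables
    # are placed / replicated by the chosen strategy
    pass
```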

Neuroevolution

Modify the paths to the TFRecords and validation annotation json in E3.yaml.

To run on 4 Cloud TPUs named node-1 through node-4, for example: $ python3 ga.py -c E3.yaml -a 1 2 3 4

See ga.py arguments for more details.
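
Conceptually, the -a flag takes a list of accelerator IDs that are mapped to Cloud TPU node names. The sketch below illustrates that idea with argparse; it is a simplified stand-in, not the actual argument handling in ga.py.

```python
import argparse

# Simplified sketch of the accelerator arguments -- see ga.py for the
# actual argument definitions.
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--cfg", required=True,
                    help="path to yaml config, e.g. E3.yaml")
parser.add_argument("-a", "--accelerators", nargs="+", default=["1"],
                    help="accelerator IDs, e.g. '1 2 3 4' for node-1..node-4")
args = parser.parse_args()

# Hypothetical mapping from IDs to Cloud TPU node names.
tpu_names = ["node-{}".format(a) for a in args.accelerators]
print(tpu_names)  # e.g. ['node-1', 'node-2', 'node-3', 'node-4']
```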

Acknowledgements

Hardware:

GitHub Repositories:
