This is an implementation of YOLOv3 in TensorFlow. With minor changes it can also work for other variants of YOLO.
Credit also goes to the Markdown Cheatsheet used to write this beautiful README.
- YOLOv3 C (darknet) to TensorFlow conversion (inference only)
- Inference script
- Training pipeline
- Speed evaluation
- Training on raccoon dataset
- Subdivisions implementation - aids in training big batches with small GPU vRAM (see the gradient-accumulation sketch after this list)
- Multiple learning rate scheduler with burn-in
- Protobuf file generation and inference
- [ ] Multi-resolution training
- [ ] Multi-resolution inference
- [ ] Multi-GPU training
- Fine-tuning on any dataset
- Focal loss. Note: a special case of focal loss has been implemented, where alpha=0.5 and gamma=2 (see the focal-loss sketch after this list).
- GIoU training
- mAP evaluation
- Training on Pascal VOC dataset
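The subdivisions feature above is essentially gradient accumulation: the configured batch is split into several smaller mini-batches, gradients are summed across them, and the optimizer is applied once per full batch. Below is a minimal, hypothetical sketch of the idea using the TF2 eager API; the model, loss, and function names are stand-ins, not the code used in this repository.

```python
import tensorflow as tf

# Hypothetical stand-ins; the repository's real model, loss, and data pipeline differ.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)

BATCH_SIZE = 64        # the "big batch" we want to emulate
SUBDIVISIONS = 8       # split into 8 mini-batches that actually fit in vRAM

def train_step(mini_batches):
    """mini_batches: SUBDIVISIONS pairs of (inputs, labels) tensors."""
    accumulated = [tf.zeros_like(v) for v in model.trainable_variables]
    for inputs, labels in mini_batches:
        with tf.GradientTape() as tape:
            logits = model(inputs, training=True)
            # Divide by SUBDIVISIONS so the summed gradients match one big-batch step.
            loss = loss_fn(labels, logits) / SUBDIVISIONS
        grads = tape.gradient(loss, model.trainable_variables)
        accumulated = [a + g for a, g in zip(accumulated, grads)]
    # One optimizer update per full batch, using gradients from all subdivisions.
    optimizer.apply_gradients(zip(accumulated, model.trainable_variables))
```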
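For the focal-loss item, the special case with alpha=0.5 and gamma=2 corresponds to FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t) applied to the binary cross-entropy terms. The sketch below is a generic formulation of that loss, not necessarily the exact code in this repository.

```python
import tensorflow as tf

def binary_focal_loss(y_true, y_pred, alpha=0.5, gamma=2.0):
    """FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t).

    y_true holds 0/1 targets and y_pred holds sigmoid probabilities.
    With alpha=0.5, positives and negatives are weighted equally, so this is
    binary cross-entropy scaled by 0.5 * (1 - p_t)**2.
    """
    eps = 1e-7
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)  # probability of the true class
    return -alpha * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
```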
Suggestions and pull requests are most welcome
When running the YOLOv3-608 model on a GTX 1080 Ti I get around 19 FPS.
When running the YOLOv3-608 model on a GTX 1050 Ti I get around 6 FPS.
Make a virtual environment and install all the dependencies
I am using virtualenv.
Run the following commands
virtualenv env -p python3
source env/bin/activate
pip install -r requirements.txt
`python inference.py path_to_the_image_directory path_for_saving_the_results --darknet_model 1`
The above command runs the pretrained model.
Now, this will take some work.
Find an object detection dataset that has annotations in Pascal VOC format, as my annotation-parsing script only works with that format. (Sorry, COCO lovers.)
I suggest the raccoon-dataset provided by experiencor (who also has a great implementation of YOLOv3 in Keras).
Make a directory for your dataset containing two folders: one holding the images and the other holding the corresponding annotations. There must be a one-to-one correspondence by file name between images and annotations.
I suggest the following structure.
    + dataset
    |
    |___ + dataset_name
          |
          |___ + images
          |     |
          |     |___ + all the image files
          |
          |___ + annotations
                |
                |___ + all the annotation files
Make a dataset-name_classes.txt file inside the model_data folder, with one class name per line. A sample file named sample_classes.txt has been provided for reference.
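For example, the raccoon dataset suggested above has a single class, so its classes file (e.g. raccoon_classes.txt) would contain just one line:

```
raccoon
```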
The configuration file is a Python .py file containing variables; updating them is pretty self-explanatory thanks to the comments provided.
If you are doing this step, update the variable anchors_path in your config.py file to point to the location where you want to save the newly generated anchors.
Run the following command
python k-means.py
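Roughly, anchor generation clusters the widths and heights of all ground-truth boxes, usually with 1 - IoU as the distance so that large and small boxes are treated fairly. The sketch below illustrates that idea; it is a generic NumPy version with made-up function names, not the actual contents of k-means.py.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, as if every box and anchor shared a corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union                      # shape (num_boxes, num_anchors)

def kmeans_anchors(boxes, k=9, iters=100):
    """Cluster ground-truth (w, h) pairs using 1 - IoU as the distance."""
    anchors = boxes[np.random.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # highest IoU = closest
        for i in range(k):
            if np.any(assign == i):
                anchors[i] = boxes[assign == i].mean(axis=0)
    # Sort by area so anchors go from smallest to largest, as YOLO expects.
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
```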
Running the following command will do some dark magic as done in darknet (pun intended) and start training the model.
python train.py
You might get an error the first time you run training; I am still working on resolving it. (Suggestions are welcome, I am still a noob.)
Run the command again and you are good to go.
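Part of the darknet-style "dark magic" mentioned above is the learning-rate schedule with burn-in from the feature list: the learning rate ramps up polynomially over the first iterations and then drops at fixed steps. A rough sketch of that schedule follows; the constants are typical darknet-config values, not necessarily the ones used in train.py.

```python
def learning_rate(step, base_lr=1e-3, burn_in=1000, power=4,
                  boundaries=(40000, 45000), scales=(0.1, 0.1)):
    """Darknet-style schedule: polynomial burn-in, then step decay."""
    if step < burn_in:
        # Ramp up from ~0 to base_lr over the first `burn_in` iterations.
        return base_lr * (step / burn_in) ** power
    lr = base_lr
    for boundary, scale in zip(boundaries, scales):
        if step >= boundary:
            lr *= scale          # multiply the LR down at each boundary
    return lr
```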
`python inference.py path_to_the_image_directory path_for_saving_the_results`