Object Detection (YOLOv2)
YOLOv2 is an object detection network developed by Redmon & Farhadi that identifies objects in images and draws bounding boxes around them. Here, we have adapted a Keras implementation of YOLOv2. While YOLO can be used for any object detection task, we demonstrate how the notebook can be used on a hand-labelled example dataset of migrating cells, where cells are identified as elongated, rounded, dividing or spread-out.
Our YOLOv2 notebook is based on the following paper:
-
The original source code of YOLOv2 can be found here: https://pjreddie.com/darknet/yolov2/
In this notebook we use a version of YOLOv2 adapted for Keras, the source code of which can be found here: https://github.com/experiencor/keras-yolo2
Please also cite the original paper when using or developing our notebook.
Training an object detection network requires an annotated dataset, i.e. images in which the objects have been identified and labelled by a human. Training on a custom dataset, such as specific cell types, therefore requires hand-annotating the data. The training dataset then consists of the raw images as inputs and, as targets, corresponding files containing the coordinates and class of every bounding box in a given image. To use this notebook, these target files need to be .xml files in the PASCAL VOC format. To create such a dataset from custom examples, we used a simple web tool, makesense.ai, which allows you to upload images (which need to be in .jpg or .png format) and label them in a simple GUI. To replicate this on your own dataset, follow these steps:
- Go to makesense.ai and click on 'Get Started'
- Upload your images - click on the main box to browse your files (need to be .png or .jpg, not .tif)
- When your images are uploaded select 'Object Detection'
- Create a labels list with the names of the classes you want to identify in your dataset. You can add classes by clicking on the '+' in the top left corner. (If you forget something, you can always add more labels later by clicking on 'Update Label Names' at the top and then on '+' in the dialogue box.)
- When you have finished your labels list, select 'Going on my own' and leave the boxes unchecked.
- Now you can start labelling your images. Draw bounding boxes with the cursor, then assign the label on the right-hand side by clicking on 'Select Label' and choosing from the dropdown list.
- When all images are labelled to your satisfaction, click on 'Export Labels' in the top right and select 'A .zip package containing files in VOC XML format'. Leave the other boxes unchecked, then click 'Export'.
Each label file will have the name of the original image file, with an .xml suffix.
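If you want to sanity-check an exported annotation, the sketch below parses a minimal PASCAL VOC .xml file of the kind makesense.ai produces, using only the Python standard library. The file name `cell_01.png`, the class name `elongated`, and the box coordinates are hypothetical examples, not values from a real export.

```python
# Sketch: read a PASCAL VOC annotation and extract its bounding boxes.
# The annotation content below is a hypothetical example.
import xml.etree.ElementTree as ET

VOC_XML = """<annotation>
    <filename>cell_01.png</filename>
    <size><width>512</width><height>512</height><depth>3</depth></size>
    <object>
        <name>elongated</name>
        <bndbox>
            <xmin>34</xmin><ymin>120</ymin><xmax>98</xmax><ymax>210</ymax>
        </bndbox>
    </object>
</annotation>"""

def parse_voc(xml_text):
    """Return (image_filename, [(class_name, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return filename, boxes

filename, boxes = parse_voc(VOC_XML)
print(filename, boxes)
```

Running `parse_voc` on each exported file (read with `Path(...).read_text()`) is a quick way to confirm that every box has the four `bndbox` coordinates and a class name from your labels list.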
Put your source images and target annotations in separate folders, upload them to your Google Drive, and you are ready to start training.
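Before uploading, it can save a failed training run to verify that every image has a same-named annotation file. A minimal sketch, assuming an image folder and an annotation folder whose names you supply yourself (the folder layout here is hypothetical, not prescribed by the notebook):

```python
# Sketch: list images that lack a matching .xml annotation file.
from pathlib import Path

def unmatched_images(image_dir, annotation_dir, exts=(".png", ".jpg")):
    """Return the names of image files with no same-named .xml annotation."""
    image_dir, annotation_dir = Path(image_dir), Path(annotation_dir)
    missing = []
    for img in sorted(image_dir.iterdir()):
        if img.suffix.lower() in exts:
            # makesense.ai names each annotation after its image, e.g. a.png -> a.xml
            if not (annotation_dir / (img.stem + ".xml")).exists():
                missing.append(img.name)
    return missing
```

An empty return value means every .png/.jpg in the image folder has a corresponding .xml target, so the paired dataset is complete.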
Network | Link to example training and test dataset | Direct link to notebook in Colab
---|---|---
YOLOv2 | here |
or:
To train YOLOv2 in Google Colab:
- Download our streamlined ZeroCostDL4Mic notebooks
- Open Google Colab
- Once the notebook is open, follow the instructions.