YOLOV4 Inference
Here, you will see how to get started with quickai for YOLOV4 inference. We will be using pre-trained weights to perform inference!
Start by cloning the repo. Once you have done that, download the checkpoints/ folder from the link on the README and copy it to the examples/ folder in the repo. In the examples folder, go ahead and run yolov4_image.py. After some time, depending on your system configuration, you will see a window pop up with an image that has boxes drawn on it. These boxes are the detections. Now, let's look at the code for yolov4_image.py.
from quickai import YOLOV4
YOLOV4(media_type="image", image="kite.jpg", weights="./checkpoints/yolov4-416")
In the first line, we import the YOLOV4 method from quickai. In the second line, we call that method with media_type set to image, image set to kite.jpg, and weights set to ./checkpoints/yolov4-416. The media_type is image because we are performing object detection on a single image. The image parameter is the path of the input image, and the weights parameter is the path to the pre-trained model weights.
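To run detection on a picture of your own, the call is the same; only the image path changes. In this sketch, dog.jpg is a hypothetical file placed in the examples folder; substitute any image on disk:
from quickai import YOLOV4
# Same pre-trained weights as before; only the input image differs.
# "dog.jpg" is a hypothetical example path, not a file shipped with the repo.
YOLOV4(media_type="image", image="dog.jpg", weights="./checkpoints/yolov4-416")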
The same basic code structure is used to perform YOLOV4 inference on a video or live webcam feed:
from quickai import YOLOV4
YOLOV4(media_type="video", image="road.mp4", weights="./checkpoints/yolov4-416")
All the parameters are the same, with the obvious exception of media_type, which is set to video (and image, which now points to a video file). If you want to perform inference on a webcam, set the image parameter to the integer index that OpenCV would use to open that webcam. If you have one webcam on your system, that index is 0, a second webcam would be 1, and so on.
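As a minimal sketch of that webcam case (assuming, as the text above describes, that the video code path accepts an OpenCV device index in place of a file path), the call would look like this:
from quickai import YOLOV4
# media_type stays "video"; the integer 0 selects the first webcam,
# the same device index OpenCV's VideoCapture would use.
YOLOV4(media_type="video", image=0, weights="./checkpoints/yolov4-416")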