This application uses YOLOv8 to detect objects in images by splitting them into tiles. Total runtime is printed so different model architectures can be compared.
- Models can be converted to ONNX format for faster inference on CPU/GPU.
- Postprocessing across tiles runs in parallel.
- A buffer is added around each image tile to improve detection accuracy near tile borders.
- A Docker environment is provided so the app can run as a standalone executable.
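The tiling-with-buffer idea can be sketched as below. This is a minimal illustration only, not the app's actual implementation; the function name, tile size, and buffer width are all assumed for the example.

```python
import numpy as np

def tile_with_buffer(image: np.ndarray, tile: int = 640, buffer: int = 32):
    """Split an image into tiles on a regular grid, expanding each tile
    by `buffer` pixels on every side (clamped at the image edges) so
    objects straddling tile borders are still fully visible."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(0, y - buffer), max(0, x - buffer)
            y1, x1 = min(h, y + tile + buffer), min(w, x + tile + buffer)
            # keep the grid origin so detections can be mapped back
            tiles.append(((x, y), image[y0:y1, x0:x1]))
    return tiles
```

Detections from each buffered tile are then shifted back into full-image coordinates using the stored grid origin.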
Install the requirements:
pip install -r requirements.txt
To run with PyTorch (GPU by default, falling back to CPU):
python main.py --input_path "your/input/path/" --output_path "your/output/path/"
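The GPU-with-CPU-fallback behavior can be sketched as a small device picker. This is an assumed helper for illustration, not code from `main.py`:

```python
def pick_device() -> str:
    """Return "cuda" when PyTorch and a CUDA device are available,
    otherwise fall back to "cpu"."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```

The model and input tensors would then be moved to the returned device before inference.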
To run with ONNX on CPU only:
python main.py --input_path "your/input/path/" --output_path "your/output/path/" --use_onnx
To run with ONNX on GPU, falling back to CPU:
python main.py --input_path "your/input/path/" --output_path "your/output/path/" --use_onnx --onnx_gpu
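In ONNX Runtime, the GPU-with-CPU-fallback behavior is usually expressed through the ordered provider list passed to the session; providers are tried in order, so listing `CPUExecutionProvider` last gives the fallback. The helper below is a sketch of that pattern, assumed for illustration:

```python
def select_providers(use_gpu: bool) -> list:
    """Build an ONNX Runtime provider list; order defines preference."""
    providers = ["CPUExecutionProvider"]
    if use_gpu:
        # tried first; ONNX Runtime falls back to CPU if CUDA is unavailable
        providers.insert(0, "CUDAExecutionProvider")
    return providers

# session = onnxruntime.InferenceSession("model.onnx",
#                                        providers=select_providers(use_gpu=True))
```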
Use PyTorch, running on GPU with CPU fallback:
python main.py -i test/ --model_name yolov8x --class_yaml coco8.yaml
Use ONNX, running on CPU:
python main.py -i test/ --model_name yolov8x --class_yaml coco8.yaml --use_onnx
Use ONNX, running on GPU with CPU fallback:
python main.py -i test/ --model_name yolov8x --class_yaml coco8.yaml --use_onnx --onnx_gpu
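The parallel postprocessing mentioned in the feature list can be sketched with `concurrent.futures`, mapping a per-tile postprocess function over all tile results. Both the filtering step and the data shape here are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def postprocess(tile_result):
    """Placeholder per-tile step (e.g. confidence filtering, NMS,
    mapping boxes back to full-image coordinates)."""
    return [box for box in tile_result if box["conf"] > 0.25]

def postprocess_all(results):
    """Run postprocessing over all tiles in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(postprocess, results))
```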