Add native build flag for ci build and fma,avx2 for cpu
xnorpx committed Dec 10, 2023
1 parent bd1a5f1 commit 98239a2
Showing 3 changed files with 23 additions and 20 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/ci.yml
@@ -24,6 +24,7 @@ jobs:
name: build-cuda
env:
RUST_BACKTRACE: 1
RUSTFLAGS: -C target-cpu=native
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
@@ -93,6 +94,7 @@ jobs:
name: build-test-cpu
env:
RUST_BACKTRACE: 1
RUSTFLAGS: -C target-cpu=native
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
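
For context on the new flag: `-C target-cpu=native` tells rustc to generate code for the exact CPU of the machine doing the build, so CI jobs that build and test on the same runner pick up AVX2, FMA, and F16C automatically whenever the runner supports them; the trade-off is that such binaries are only guaranteed to run on CPUs at least as capable as the build machine. A minimal, illustrative Rust snippet (not part of Blue Candle) that reports which of those features a given build actually enabled:

```rust
// Illustrative only: print whether the relevant x86 features were enabled at
// compile time, e.g. for a build made with RUSTFLAGS="-C target-cpu=native".
fn main() {
    println!("avx2: {}", cfg!(target_feature = "avx2"));
    println!("fma:  {}", cfg!(target_feature = "fma"));
    println!("f16c: {}", cfg!(target_feature = "f16c"));
}
```
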
2 changes: 2 additions & 0 deletions .github/workflows/release.yml
@@ -46,6 +46,7 @@ jobs:
needs: ['create-release']
env:
RUST_BACKTRACE: 1
RUSTFLAGS: -C target-feature=+avx2,+fma,+f16c
ASSET:
ASSET_SUM:
runs-on: ${{ matrix.os }}
@@ -166,6 +167,7 @@ jobs:
needs: ['create-release']
env:
RUST_BACKTRACE: 1
RUSTFLAGS: -C target-feature=+avx2,+fma,+f16c
ASSET:
ASSET_SUM:
runs-on: ${{ matrix.os }}
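
Unlike the CI jobs above, the release jobs pin an explicit baseline (`+avx2,+fma,+f16c`) rather than `native`, presumably so the published binaries do not depend on whichever CPU the release runner happens to have; the cost is that they require those extensions on the user's machine. As a hedged aside, the sketch below shows the per-function alternative that a whole-binary baseline avoids having to write: enabling the features only on selected functions and dispatching at runtime. This is illustrative Rust, not code taken from Blue Candle or Candle.

```rust
// Illustrative: enable AVX2/FMA for one hot function and dispatch at runtime,
// instead of compiling the whole binary with -C target-feature=+avx2,+fma,+f16c.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2,fma")]
unsafe fn dot_avx2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn dot(a: &[f32], b: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if std::arch::is_x86_feature_detected!("avx2")
            && std::arch::is_x86_feature_detected!("fma")
        {
            // Safe: the CPU was just checked for the features dot_avx2 requires.
            return unsafe { dot_avx2(a, b) };
        }
    }
    // Portable fallback for CPUs (or architectures) without AVX2/FMA.
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0_f32; 8];
    let b = [0.5_f32; 8];
    println!("dot = {}", dot(&a, &b));
}
```
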
39 changes: 19 additions & 20 deletions README.md
@@ -1,22 +1,16 @@
# Blue Candle

**Object detection service, portable, one file less than 20 MB in size**
![Blue Candle Logo](assets/blue-candle.png)

---

## TL;DR

- [Blue Iris](https://blueirissoftware.com/) compatible.
- Object detection service
- [Yolo8](https://github.com/ultralytics/ultralytics) inference implementation
- One small binary < 20 MB with one built in model.
- Dockerless
- Cuda support
- Simple to use
- Written in Rust
- Supports Windows
- Built-in small [Yolo8](https://github.com/ultralytics/ultralytics) model with support for other Yolo8 models.
- Yolo model implementation based on examples in [Candle](https://github.com/huggingface/candle), which in turn is based on the implementation in [Tinygrad](https://github.com/tinygrad/tinygrad).
- Uses [Axum](https://github.com/tokio-rs/axum) as the web framework.
- Utilizes Candle as the ML backend.
- [Blue Iris](https://blueirissoftware.com/) API compatible.

---

![Blue Candle Logo](assets/blue-candle.png)

---

@@ -28,7 +22,7 @@ Written in Rust, Blue Candle promises high performance and reliability. It uses

The Yolo model implementation in Blue Candle is based on examples found in Candle, which in turn is based on the Tinygrad implementation.

Our goal with Blue Candle is to provide an accessible, user-friendly, and efficient object detection solution that can seamlessly integrate with your existing Blue Iris setup.
Our goal with Blue Candle is to provide an accessible, user-friendly, and efficient object detection solution that can seamlessly integrate with your existing home automation setup.

---

@@ -40,10 +34,15 @@ Our goal with Blue Candle is to provide an accessible, user-friendly, and effici
- Visit the [Blue Candle releases page](https://github.com/xnorpx/blue-candle/releases/latest) to download the latest version. There are versions available for both CPU and CUDA (GPU).

2. **Check CUDA Compatibility:**
- If you are using the CUDA version, ensure your GPU is compatible. Blue Candle supports compute capabilities 6.1 or higher. You can check your GPU's compute capability on the [NVIDIA CUDA GPUs page](https://developer.nvidia.com/cuda-gpus).
- If you are using the CUDA version, ensure your GPU is compatible. Blue Candle supports compute capabilities 6.1 or higher. You can check your GPU's compute capability on the [NVIDIA CUDA GPUs page](https://developer.nvidia.com/cuda-gpus). Also ensure you have the [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) installed.

3. **Check CPU Compatibility:**
- The prebuilt release binaries are built with [AVX2](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) enabled. The first Intel processors with AVX2 were the Haswell generation (2013). If your CPU does not support AVX2 (a quick way to check is sketched after this list), you will need to build Blue Candle yourself or file a ticket so that a compatible binary can be released.

3. **Choose the Correct Release:**
- Download the appropriate version for your system (CPU or CUDA). For CUDA, select the release that matches your GPU's compute capability.
4. **Choose the Correct Release:**
- Download the appropriate version for your system (CPU or CUDA). For CUDA, select the release that matches your GPU's compute capability. If your target platform is not available as a prebuilt binary, please file a ticket.
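
If you are unsure whether your CPU has these extensions, your operating system's CPU information will list them (for example, the flags in `/proc/cpuinfo` on Linux). If you happen to have a Rust toolchain installed, the short, illustrative program below (not shipped with Blue Candle) checks the same AVX2/FMA/F16C baseline that the prebuilt CPU releases are compiled with:

```rust
// Illustrative compatibility check: reports whether this CPU supports the
// AVX2/FMA/F16C baseline used for the prebuilt Blue Candle CPU binaries.
fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        let avx2 = std::arch::is_x86_feature_detected!("avx2");
        let fma = std::arch::is_x86_feature_detected!("fma");
        let f16c = std::arch::is_x86_feature_detected!("f16c");
        println!("avx2: {avx2}, fma: {fma}, f16c: {f16c}");
        if avx2 && fma && f16c {
            println!("This CPU can run the prebuilt CPU releases.");
        } else {
            println!("Build from source or file a ticket for a compatible binary.");
        }
    }
    #[cfg(not(target_arch = "x86_64"))]
    println!("Not an x86-64 CPU, so the prebuilt x86-64 binaries do not apply.");
}
```
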

## Getting Started

@@ -214,11 +213,11 @@ By using, modifying, or distributing any part of this project, you agree to comp
## Acknowledgments
- Web Framework: [Axum](https://github.com/tokio-rs/axum)
- Blue Iris Software: [Blue Iris](https://blueirissoftware.com/)
- ML Backend: [Candle](https://github.com/huggingface/candle)
- Yolo 8 Model: [Ultralytics](https://github.com/ultralytics/ultralytics)
- Yolo Model Inspiration: [Tinygrad](https://github.com/tinygrad/tinygrad)
- Blue Iris Software: [Blue Iris](https://blueirissoftware.com/)
- Web Framework: [Axum](https://github.com/tokio-rs/axum)
---
