v0.5.3
Pre-release
⚠️ Notice
- llama.cpp backend (CPU, Metal) now requires a redownload of the GGUF model due to upstream format changes: #645 ggerganov/llama.cpp#3252
- Due to indexing format changes, `~/.tabby/index` needs to be manually removed before any further runs of `tabby scheduler` (see the migration sketch after this list).
- `TABBY_REGISTRY` is replaced with `TABBY_DOWNLOAD_HOST` for the GitHub-based registry implementation.
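As a minimal migration sketch covering both notice items: the commands assume a Unix shell, and the `TABBY_DOWNLOAD_HOST` value shown is a placeholder rather than a documented default.

```bash
# Drop the old index (indexing format changed), then rebuild it.
rm -rf ~/.tabby/index
tabby scheduler

# TABBY_REGISTRY is removed; point downloads at a host via TABBY_DOWNLOAD_HOST instead.
export TABBY_DOWNLOAD_HOST=github.com   # placeholder value; set to your registry host
```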
🚀 Features
- Improved dashboard UI.
🧰 Fixes and Improvements
- CPU backend is switched to llama.cpp: #638
- Add `server.completion_timeout` to control the code completion interface timeout: #637 (a hedged config sketch follows this list)
- CUDA backend is switched to llama.cpp: #656
- Tokenizer implementation is switched to llama.cpp, so Tabby no longer needs to download an additional tokenizer file: #683
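For illustration of the new `server.completion_timeout` option, here is a sketch only: it assumes the setting lives under a `[server]` table in `~/.tabby/config.toml` and that the value is in seconds; neither detail is confirmed by these notes, so check the Tabby documentation before relying on it.

```bash
# Hedged sketch: appends a [server] table with the new timeout option.
# Table name and units are assumptions, not verified settings.
cat >> ~/.tabby/config.toml <<'EOF'
[server]
completion_timeout = 30   # hypothetical: seconds before a completion request times out
EOF
```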
💫 New Contributors
- @CrCs2O4 made their first contribution in #597
- @yusiwen made their first contribution in #620
- @gjedeer made their first contribution in #635
- @XpycT made their first contribution in #634
- @HKABIG made their first contribution in #662
Full Changelog: v0.4.0...v0.5.3