Releases: av/harbor
v0.2.16 - 4 new services, bugfixes, Discord
Overview
This release comes with a few new services, as well as some additional helper features.
Open WebUI Pipelines
Integration authored by @ic4l4s9c 🎉
`pipelines` bring modular, customizable workflows to any UI client supporting OpenAI API specs.
harbor up pipelines
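As an illustration of the idea, a pipeline is essentially a named object exposing a `pipe` callable that receives an incoming chat request and returns (or forwards) a transformed result. The class below is a toy sketch in that spirit; the name and behaviour are invented for the example, not taken from the Pipelines project:

```python
# Toy pipeline step. The class/method shape loosely follows the Pipelines
# convention of exposing a `pipe` callable; everything here is illustrative.

class UppercasePipeline:
    """Toy filter: upper-cases the latest user message before it reaches a model."""

    def __init__(self):
        self.name = "uppercase-demo"

    def pipe(self, user_message: str, model_id: str, messages: list, body: dict) -> str:
        # A real pipeline would forward the transformed request to a model;
        # here we simply return the transformed text.
        return user_message.upper()

pipeline = UppercasePipeline()
print(pipeline.pipe("hello harbor", "llama3", [], {}))  # HELLO HARBOR
```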
Qdrant
Also authored by @ic4l4s9c 🎉
First shared vector store for Harbor - will be used by supported satellites in the future.
harbor up qdrant
REST API: http://localhost:34221
Web UI: http://localhost:34221/dashboard
gRPC API: http://localhost:34222
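What a shared vector store gives satellites is nearest-neighbour search over embeddings. A minimal plain-Python sketch of that idea (no Qdrant client involved; the point ids and vectors are invented for the example):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "collection": point id -> embedding
points = {
    "doc-1": [1.0, 0.0, 0.0],
    "doc-2": [0.0, 1.0, 0.0],
    "doc-3": [0.9, 0.1, 0.0],
}

def search(query, top_k=2):
    """Return the ids of the top_k points most similar to the query vector."""
    ranked = sorted(points, key=lambda pid: cosine(points[pid], query), reverse=True)
    return ranked[:top_k]

print(search([1.0, 0.0, 0.0]))  # ['doc-1', 'doc-3']
```

A real deployment would issue the same kind of query against Qdrant's API instead of an in-memory dict.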
Chat Nio
A new and promising WebUI for LLMs. Harbor pre-configures it for `ollama` and `searxng` out of the box.
# Start the service
harbor up chatnio
# [Optional] Open the UI
harbor open chatnio
K6
Load testing toolkit. Harbor's version comes with a lot of things pre-configured for testing OpenAI-compatible LLM APIs, including a custom Grafana dashboard (with token stats), API client helpers, and sample scripts.
Discord
We have a Discord now. Come say "Hi 👋🏻" if you want to chat with someone about Harbor.
Misc
- `webui` - fixing missing `override.env` after v0.2.15
- `harbor open` - now supports overrides via `<service>.open_url` config
New Contributors
Full Changelog: v0.2.15...v0.2.16
v0.2.15 - CLI fixes and improvements
harbor env
You can now set service-specific env vars directly via Harbor CLI
# Show the current environment variables for the "n8n" service
harbor env n8n
# Get a specific environment variable
# for the dify service (LOG_LEVEL under the hood)
harbor env dify log.level
# Set a brand new environment variable for the service
harbor env cmdh NODE_ENV development
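The `log.level` example above hints at a simple normalization from the CLI's dot notation to the underlying variable name. A guessed sketch of that mapping (not Harbor's actual implementation):

```python
def to_env_var(key: str) -> str:
    """Convert CLI-style dot notation to an environment variable name."""
    return key.replace(".", "_").upper()

print(to_env_var("log.level"))  # LOG_LEVEL
print(to_env_var("node.env"))   # NODE_ENV
```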
Misc
- `cmdh` - fixing service build and init
- `n8n` - using `N8N_SECURE_COOKIE` by default, since Harbor is expected to run on `localhost`
- `comfyui` - disable web auth by default to provide seamless integration with `webui` out of the box
- documentation fixes and improvements
v0.2.14
- Relaxing Python version requirement for PyPI installs
- Attempt to explicitly resolve home folder permissions for Harbor App
Full Changelog: v0.2.13...v0.2.14
v0.2.13 - n8n integration
`n8n` is a low-code workflow automation tool that has good support for LLMs.
# Start the service
harbor up n8n
# [Optional] open in the browser
harbor open n8n
Harbor's integration is mostly about allowing you to run and configure `n8n` like other Harbor services, since dynamically pre-configuring connections/workflows is not currently supported.
Please find detailed instructions in the service docs.
Misc
- Broken (for now) integration with Bolt.new, awaiting fixes in the upstream
Full Changelog: v0.2.12...v0.2.13
v0.2.12 - CLI improvements and bugfixes
The most notable feature in this release is Aliases - allowing you to configure shortcuts for `harbor run` to execute.
# Configure an alias - can be arbitrary shell command
harbor alias set echo 'echo "I like $APP!"'
# Run with "harbor run"
APP=Harbor harbor run echo
"I like Harbor!"
# Reference
alias|aliases|a [ls|get|set|rm] - Manage Harbor aliases
  alias ls|list              - List all aliases
  alias get <name>           - Get an alias
  alias set <name> <command> - Set an alias
  alias rm|remove <name>     - Remove an alias
See more detailed documentation in the CLI reference.
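Since an alias is an arbitrary shell command, variables like `$APP` in the example above are expanded at run time from the caller's environment. A rough Python sketch of that store-and-expand behaviour (illustrative only, not Harbor's implementation):

```python
import os

# Toy alias store: name -> shell command
aliases = {}

def alias_set(name, command):
    aliases[name] = command

def alias_run(name, env=None):
    """Expand $VARS in the stored command against the given environment."""
    saved = dict(os.environ)
    os.environ.update(env or {})
    try:
        return os.path.expandvars(aliases[name])
    finally:
        os.environ.clear()
        os.environ.update(saved)

alias_set("echo", 'echo "I like $APP!"')
expanded = alias_run("echo", {"APP": "Harbor"})
print(expanded)  # echo "I like Harbor!"
```

A real implementation would hand the expanded string to the shell rather than just returning it.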
Misc
- `harbor up` - now supports additional modifiers (logs, open, no defaults)
- `harbor stats` - shows a stats stream for Harbor services
- `harbor down` - now correctly matches sub-services
- `harbor shell` - allows choosing a shell out of the box: `harbor shell <service> ash` (note that the service must bundle the given shell)
- `harbor config` - correctly handles escapes for dict values
- minor refactoring of the internal command structure
- `harbor doctor` - outputs the specific location of Harbor home for debugging
- Harbor App - handle the case when a selected profile is deleted outside of the app
- User Guide improvements
- CLI Reference expansion
Full Changelog: v0.2.11...v0.2.12
v0.2.11 - NPM, PyPI, Windows app
Very experimental features:
- Installing Harbor via `npm`, `pipx` - see more in the revised install instructions
- Enabling Windows Harbor App builds (likely won't be functional)
Misc
- Switching to Tauri v2.0.0 release version
- Fix being unable to run multiple Ollama CLIs at the same time
- `fixfs` now accounts for `plandex`
Full Changelog: v0.2.7...v0.2.11
v0.2.7
`boost`
- support per-request parameterization
- small improvements to the `llm` and `chat` APIs
- a series of experimental custom modules

small improvements to `bench`:
- `short` judge prompt type
- fix tasks report generation to correctly display the used prompt
- allow specifying judge `max_tokens` via config and CLI
Full Changelog: v0.2.6...v0.2.7
v0.2.6 - Search, Repopack
Harbor App now allows searching for services as well as configuration items.
We now also have a friend.
Repopack integration
Repopack is a tool that packs your entire repository into a single, AI-friendly file.
Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, and Gemini (feeding Harbor to Gemini 1.5 Pro costs ~$1).
cd ./my/repo
harbor repopack -o my.repo --style xml
See fabric and other CLI satellites for some interesting use-cases.
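The core idea, walking a repository and concatenating every file into one model-friendly document, can be sketched in a few lines (illustrative only; Repopack's actual XML style and options differ):

```python
import os
import tempfile

def pack_repo(root: str) -> str:
    """Concatenate every file under `root` into one XML-ish document."""
    chunks = ["<repository>"]
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, encoding="utf-8", errors="replace") as fh:
                chunks.append(f'<file path="{rel}">\n{fh.read()}\n</file>')
    chunks.append("</repository>")
    return "\n".join(chunks)

# Demo on a throwaway "repo" with a single file
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "README.md"), "w") as fh:
    fh.write("Hello, Harbor!")
packed = pack_repo(demo)
print(packed)
```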
Full Changelog: v0.2.5...v0.2.6
v0.2.5 - Theme freedom
Harbor is all about customization. I decided that the app theme should be fully representative of this (plus my designer colleagues told me it's a very bad idea), so enjoy: full customization of the Harbor App theme.
Misc
- Integrated Nexa. Unfortunately, there's a big bug in the Nexa Server that makes streaming unusable; otherwise this integration is something I'm quite proud of, as Harbor delivers two key improvements over the official Nexa releases:
- Compatibility with Open WebUI (with streaming disabled, for now)
- CUDA support in Docker
- That said, only NLP models were tested, so Audio/Visual verification is warmly welcomed!
- Nexa Service docs
- Small improvements to the App UI: Loaders, gradient service cards based on the tags (very faint)
Full Changelog: v0.2.4...v0.2.5
v0.2.4 - AnythingLLM
AnythingLLM integration
A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.
AnythingLLM divides your documents into objects called workspaces. A workspace functions a lot like a thread, but with the addition of containerization of your documents. Workspaces can share documents, but they do not talk to each other, so you can keep your context for each workspace clean.
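That containerization model, where documents can be shared between workspaces but chat context never leaks across them, can be sketched as a small data structure (illustrative, not AnythingLLM's code):

```python
class Workspace:
    """Thread-like container: shares documents freely, never shares chat context."""

    def __init__(self, name):
        self.name = name
        self.documents = set()  # document ids visible to this workspace
        self.history = []       # chat messages, private to this workspace

    def add_document(self, doc_id):
        self.documents.add(doc_id)

    def chat(self, message):
        self.history.append(message)

# The same document can live in two workspaces...
a, b = Workspace("research"), Workspace("support")
a.add_document("handbook.pdf")
b.add_document("handbook.pdf")

# ...but each conversation stays contained.
a.chat("Summarise the handbook")
print(b.history)  # [] - workspace "b" saw nothing
```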
Starting
# Start the service
harbor up anythingllm
# Open the UI
harbor open anythingllm
Out-of-the-box connectivity:
- Ollama - You'll still need to select specific models for LLM and embeddings
- llama.cpp - Embeddings are not pre-configured
- SearXNG for Web RAG - still needs to be enabled for a specific Agent