7. Adding A New Service
First of all, huge kudos for considering adding a new service to the Harbor! 🎉
To add a new service, you'll need to consider a few things:
See the guide on dynamic multi-file and cross-file configurations.
The service handle is a unique identifier for the service. It should represent the service well and be easy(ish) to type.
This handle has to be used in:
- compose file names
- compose service names
- container names
- folder names
- env variables
- config files
# Good
sop
postner
litellm
webui
# Meh
lm-evaluation-harness
service-51
service_1
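Whatever handle you choose will then appear consistently across all of the artifacts above. A quick sketch for a hypothetical handle "myservice":

compose.myservice.yml                  # compose file name
myservice                              # compose service name
${HARBOR_CONTAINER_PREFIX}.myservice   # container name
myservice/                             # workspace folder
HARBOR_MYSERVICE_HOST_PORT             # env variable
myservice/configs/config.yml           # config file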
Create a new compose file with the service handle in the name. If the service needs to interact with other existing services, use the cross-file rule. Also consider platform-specific files, if relevant.
# Main service definitions
compose.service.yml
# Extension for CUDA toolkit
# and access to the GPU
compose.service.nvidia.yml
# Cross-service file, will be included
# if both "service" and "otherservice" are running
compose.x.service.otherservice.yml
- Service name should be the service handle
- Container name should be `${HARBOR_CONTAINER_PREFIX}.<service handle>`
- The main `.env` file should be connected
- Only what's necessary for the service to run standalone should go into the main service file
- Anything outlining a dependency should go into cross-service files
- See Harbor's `.env` for shared things like folder locations, tokens, etc. that might be needed
You'll find plenty of examples in the repo, but here's a simple one:
services:
  service:
    # It's important to keep Harbor's container prefix
    container_name: ${HARBOR_CONTAINER_PREFIX}.service
    # Prefer to make the version customizable
    image: repo/image:${HARBOR_SERVICE_VERSION}
    ports:
      # When multiple ports are exposed, the one with the lowest
      # in-container port will be the "main" one
      - ${HARBOR_SERVICE_HOST_PORT}:80
    volumes:
      # Service-local workspace
      - ./service/data:/app/data
      # Global service cache/config folder should be
      # set via a variable
      - ${HARBOR_SERVICE_CACHE}:/app/cache
    env_file:
      # Always add the main .env
      - ./.env
      # If the service supports it, add an override too
      - ./service/override.env
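Cross-service files follow the same structure, but should only contain the wiring for a dependency. As a sketch, a hypothetical `compose.x.service.ollama.yml` might look like the following (the env variable name is illustrative, use whatever the service actually expects):

services:
  service:
    environment:
      # Point the service at Ollama's in-network address;
      # this file is only included when both services run
      - OLLAMA_BASE_URL=http://ollama:11434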
Keep in mind that the service might want to persist some files, or be integrated with the global config or global caches. Reflect that in the main service definition.
If persistence is needed, name the data folder after the service handle.
harbor/
  otherservice/
  service/
Please add any persistent data to the `.gitignore`.
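For example, assuming the service persists its data under `service/data`, the corresponding entry would be (a sketch, follow how other services are already listed there):

# .gitignore
service/data/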
Then, we're ready to move on to providing the configuration necessary for the service. It might come via config files, environment variables, or CLI arguments. See the guides on config merging and config interpolation.
Using the same folder as for the persisted data, specify the "main" configuration file and any "cross-service" configuration files.
service/
  configs/
    # Base config, added whenever the service is running
    config.yml
    # Added whenever the service is running alongside the "tts" service
    config.tts.yml
    # Added whenever the service is running alongside the "litellm" and "langfuse" services
    config.x.litellm.langfuse.yml
To set up the config merging:
- Create necessary compose files replicating the structure of the config files
- Replace service entrypoint with a custom one
- Mount shared utils with the config merging logic
- In the custom entrypoint, merge the configs together and store them as a unified config file for the service to consume (preferably at the service's default location)
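As an illustration only (the file names, mount paths and merge utility below are hypothetical, not Harbor's exact implementation), the wiring could look roughly like this. The main compose file stages the base config and swaps the entrypoint:

services:
  service:
    # Custom entrypoint that merges staged configs
    # before starting the actual service
    entrypoint: ["/bin/sh", "/app/start_service.sh"]
    volumes:
      - ./service/start_service.sh:/app/start_service.sh
      # Base config, staged for merging
      - ./service/configs/config.yml:/app/configs/config.yml

A cross-service file, e.g. `compose.x.service.tts.yml`, only stages the extra config:

services:
  service:
    volumes:
      - ./service/configs/config.tts.yml:/app/configs/config.tts.yml

The custom entrypoint then merges whatever ended up staged and starts the service:

#!/bin/sh
# start_service.sh (sketch): merge all staged configs into the location
# the service reads by default, then start it as usual.
# "merge_yaml_configs" and "service-binary" are placeholders.
merge_yaml_configs /app/configs/*.yml > /etc/service/config.yml
exec service-binary --config /etc/service/config.yml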
You can pass environment variables in multiple ways:
services:
  service:
    env_file: ./.env
    environment:
      - ENV_VAR=${HARBOR_ENV_VAR}
Useful when you want to ensure that the service ultimately receives a specific value, or when the env variable name is overly abstract.
# .env
SERVICE_CUSTOM_VARIABLE="value"
Should be reserved for cases where a given variable is persistent, global and generally doesn't change.
# service/configs/config.json
{
"key": "${HARBOR_ENV_VAR}"
}
Note
This doesn't work for arbitrary mounted files, only when using the config merging logic.
An alternative to in-compose env vars, also useful when the service doesn't natively support env vars for a given configuration option.
A typical pattern for Harbor is to have a `service args` CLI to set the "extra" arguments to be passed to the service. It can be added in a few steps:
# .env
HARBOR_SERVICE_EXTRA_ARGS=""
It automatically becomes manageable via the `harbor` env manager:
harbor config set service.extra_args "--some-arg 123"
Add this to the `compose.service.yml`:
services:
  service:
    # Combine the extra args with the rest of
    # the original service command
    command: >
      ${HARBOR_SERVICE_EXTRA_ARGS}
If the service has its own sub-CLI, i.e. `harbor service`, ensure to also add the shortcuts for arg management there.
You can use two helpers for quickly mapping sub-cli sections to the configuration:
- `env_manager_alias` - for string values (get/set syntax depending on the value presence + help)
- `env_manager_arr` - for array values (ls/rm/add syntax + help)
Example:
# Service sub-cli should be a dedicated bash function
# with the "run_<service>_command" name
run_service_command() {
  case "$1" in
    version)
      shift
      env_manager_alias service.version "$@"
      ;;
    tags)
      shift
      env_manager_arr service.tags --on-set update_main_key "$@"
      ;;
    *)
      echo "Usage: harbor service {version|tags}"
      # When nothing matched at a service level,
      # return the special exit code to indicate
      # that this command could be retried with a
      # different order of arguments
      return $scramble_exit_code
      ;;
  esac
}

main_entrypoint() {
  case "$1" in
    # Add the service to the main CLI router,
    # you can use aliases where semantically correct
    newservice)
      shift
      run_service_command "$@"
      ;;
    # Other cases...
  esac
}
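With the router in place, the new options follow the same conventions as the rest of the CLI. A usage sketch, assuming the service is routed as `harbor service`:

# String value: no argument reads it, an argument sets it
harbor service version
harbor service version v1.2.3

# Array value: ls/add/rm sub-commands
harbor service tags ls
harbor service tags add mytag
harbor service tags rm mytag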
Harbor App has a mini-registry of services with additional tags for filtering. The newly added service needs to be present there to align the presentation with already existing services.
export const serviceMetadata = {
  // ...
  newservice: {
    tags: [HST.satellite, HST.api],
  },
  // ...
};
The metadata must have at least one primary tag (`frontend`, `backend`, `satellite`). Presence of the `cli` tag will block the "launch" functionality in the UI.
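For example, a hypothetical CLI-only satellite could be tagged as below; it would still be listed in the App, but without the launch action:

// Hypothetical entry for a CLI-only satellite service
clitool: {
  tags: [HST.satellite, HST.cli],
},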
# Start the service to debug
harbor up service
# See the logs
harbor logs service
# Check if the port mapping is correct
harbor open service
harbor url service
harbor url -i service
harbor url -a service
harbor qr service
# Inspect/debug the container if needed
# 1. Start the shell in service container
harbor shell service
# 2. Run arbitrary commands in the running service container
harbor exec service <...>
# 3. Run one-off container with service image
harbor run service <...>
Add some information about the service to:
- `show_help` and `harbor.prompt` files across the repo
- Wiki - Services, with examples and instructions
- Service Roster section in the README