Monorepo to manage the k3s cluster on my homelab server. This repo now mostly holds GitOps config for k3s. See nixos for an example of preparing the host system for k3s.

Some of the features of the k3s cluster:
- ArgoCD GitOps
- Secrets in Vault with external-secrets integration
- Ingress-nginx with cert-manager and Let's Encrypt
- Auth either with OAuth2 proxy or hiding behind tailscale-k8s-operator
- Selected apps connected only via a WireGuard gateway
- Databases: PostgreSQL, Redis, MongoDB
- Paperless document archival
- Plex media server and Samba
- My own Interactive Brokers trading bot
- NocoDB deployment for the trading app above
Required local tools:

- age
- ansible
- go-task
- terraform
- direnv (see the `.envrc` sketch after this list)
- pre-commit
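If you use direnv, the environment can be loaded automatically when you enter the repo. A minimal `.envrc` sketch, assuming `.config.env` is wired up through direnv's `dotenv` helper (this repo may load it differently):

```sh
# .envrc -- illustrative only; how this repo actually loads .config.env may differ
# Export every variable from .config.env whenever you cd into the repo
dotenv .config.env
```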
- Add your variables to `.config.env` (see the sketch after this list).

  ❗ You can add an extra level of security and refer to your profile-local variables, or pull secrets from a local password-store. Git hooks should prevent you from committing your secrets.

- Install pre-commit hooks: `task pre-commit:init`

- Add a WireGuard config file from your provider into the `.bootstrap-secrets` folder and name it `wireguard.conf`.

- Cluster bootstrap: run `task cluster:install` to bootstrap the cluster. If it fails due to "vault-0 not having assigned host", wait for the pod to come up and run the same task again. You can also follow the individual steps below.
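Before following the individual steps, here is a sketch of what `.config.env` can look like when secrets are pulled from a local password-store rather than written in plain text. The `pass` paths are hypothetical, and the variable set is only an example based on names that appear later in this README:

```sh
# .config.env -- illustrative sketch; pass paths and the exact variable set are assumptions
export HOMELAB_ARGOCD_PASSWORD="$(pass show homelab/argocd/admin)"
export VAULT_OAUTH2_CLIENT_ID="$(pass show homelab/oauth2/client-id)"
export VAULT_OAUTH2_CLIENT_SECRET="$(pass show homelab/oauth2/client-secret)"
```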
Install zfs-localpv:

```
task cluster:zfspv
```

(Optional) Check on the host system that a dataset under the main ZFS pool has been created:

```
zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
pool      1.79T  3.48T  1.79T  /pool
pool/k3s   568K  3.48T    96K  /pool/k3s
```
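For reference, a zfs-localpv StorageClass points at that dataset. A minimal sketch, assuming OpenEBS zfs-localpv defaults (the StorageClass actually applied by `task cluster:zfspv` may differ):

```yaml
# Illustrative StorageClass for OpenEBS zfs-localpv; the repo's real manifest may differ
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-localpv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "pool/k3s"          # dataset from the `zfs list` output above
  fstype: "zfs"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```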
Install and initialize Vault using GCP KMS (TODO: automate using Terraform):

```
task cluster:vault:install
task cluster:vault:init
```

Now Vault should be unsealed and initialized, which you can check with:

```
k exec -it -n vault vault-0 -- vault status

Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    5
Threshold                3
Version                  1.9.0
Storage Type             file
Cluster Name             vault-cluster-21f860b5
Cluster ID               275d5a0b-5493-8f63-ad90-68bd72c3e02c
HA Enabled               false
```
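The "Recovery Seal Type: shamir" line above is what Vault reports when auto-unseal is configured. As a rough illustration, the GCP KMS seal can be passed through the Vault Helm chart values like this (project, key ring and key names are placeholders, not the values used here):

```yaml
# Illustrative Vault Helm values; placeholders only -- see this repo's Vault config for the real ones
server:
  standalone:
    enabled: true
    config: |
      storage "file" {
        path = "/vault/data"
      }
      # Auto-unseal via GCP KMS
      seal "gcpckms" {
        project    = "my-gcp-project"     # placeholder
        region     = "global"             # placeholder
        key_ring   = "vault-keyring"      # placeholder
        crypto_key = "vault-unseal-key"   # placeholder
      }
```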
Inject the secrets into Vault:

```
./configure.sh --vault
```

You can verify that the secrets were injected:

```
k exec -n vault vault-0 -- vault kv get kv/secret/oauth2

======= Metadata =======
Key                 Value
---                 -----
created_time        2021-12-22T20:17:12.538085046Z
custom_metadata     <nil>
deletion_time       n/a
destroyed           false
version             7

=============== Data ===============
Key                             Value
---                             -----
VAULT_OAUTH2_CLIENT_ID          *****
VAULT_OAUTH2_CLIENT_SECRET      *****
VAULT_OAUTH2_COOKIE_SECRET      *****
VAULT_OAUTH2_EMAIL_WHITELIST    *****
name                            my-secret
```
❗ We use oauth2-proxy with email authentication. The comma-separated email whitelist provided via `VAULT_OAUTH2_EMAIL_WHITELIST` can be pushed into Vault with `./configure.sh --vault`.
Install the external-secrets CRDs:

```
task cluster:secrets:install
```
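To illustrate how the Vault secrets reach the cluster, here is a hedged sketch of a ClusterSecretStore plus an ExternalSecret pulling the `kv/secret/oauth2` entry shown earlier. Resource names, namespaces and the auth method are assumptions; the manifests in this repo may differ:

```yaml
# Illustrative external-secrets manifests; names and auth details are assumptions
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: "http://vault.vault:8200"   # in-cluster Vault service (assumed)
      path: "kv"                          # KV v2 mount seen in `vault kv get kv/...`
      version: "v2"
      auth:
        tokenSecretRef:                   # assumption: token auth; Kubernetes auth is also common
          name: vault-token
          namespace: vault
          key: token
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: oauth2
spec:
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: oauth2                          # resulting Kubernetes Secret
  dataFrom:
    - extract:
        key: secret/oauth2                # the entry written by ./configure.sh --vault
```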
Install ArgoCD:

```
task cluster:argo:install
```

ArgoCD will complete the cluster provisioning. After the load balancer is provisioned you should be able to access the ArgoCD UI at `argo.${your_domain}`. Alternatively, you can connect with the CLI using port forwarding, e.g.:

```
k port-forward -n argocd svc/argocd-server 8080:443
argocd login localhost:8080 --insecure --username admin --password $HOMELAB_ARGOCD_PASSWORD
```

After provisioning, ArgoCD will also assume control over its own installation and the other applications in the `init` folder, which we installed manually in the previous steps.
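The handover described above boils down to an "app of apps" style ArgoCD Application pointing at the `init` folder; a sketch with a placeholder repo URL (the real definition lives in this repo):

```yaml
# Illustrative "app of apps" Application; repoURL and sync options are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: init
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<you>/<this-repo>.git   # placeholder
    targetRevision: HEAD
    path: init                        # folder ArgoCD takes over after bootstrap
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```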
The steps above set up the reverse proxy with authentication and certificates. However, they expose the server IP; fronting DNS with Cloudflare fixes this.

❗ Enable "Development Mode" in Cloudflare as you add new ingresses so Let's Encrypt can issue the certificate.

```
./configure.sh --verify
cd provision/terraform/cloudflare
terraform plan
terraform apply
```
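Conceptually, the Terraform under `provision/terraform/cloudflare` manages proxied DNS records so the origin IP never appears in DNS answers. A minimal sketch (zone, record name and variables are placeholders; the repo's actual resources may be organised differently):

```hcl
# Illustrative proxied DNS record; all values are placeholders
resource "cloudflare_record" "argo" {
  zone_id = var.cloudflare_zone_id
  name    = "argo"                  # argo.${your_domain}
  type    = "A"
  value   = var.server_public_ip    # origin IP, hidden behind Cloudflare's proxy
  proxied = true                    # proxying is what keeps the server IP out of DNS answers
}
```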
This all runs on a single machine in the acclaimed Node 304 case, which can house 6 HDDs, although I use only 4 at the moment.

I am considering upgrading to a multi-node deployment for the "fun" of it, but the current form factor meets all my needs and is quiet, functional, and aesthetic enough to sit in plain sight in the living room.

In Feb 2023 I upgraded the old Intel Celeron to an i5-11400 CPU. The average load is now about 15%.