A Rust Kubernetes reference controller for a `Document` resource using kube-rs, with observability instrumentation.
The `Controller` object reconciles `Document` instances when changes to them are detected, writes to their `.status` object, creates associated events, and uses finalizers for guaranteed delete handling.
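The finalizer-based delete handling mentioned above can be sketched in plain Rust. This is an illustrative, std-only sketch — the `Event` enum and `classify` function are assumptions for illustration, not the controller's actual types — showing the apply/cleanup split that kube-rs's finalizer helper drives from `metadata.deletionTimestamp`:

```rust
// Illustrative sketch (not the controller's actual code) of how a finalizer
// splits reconciliation into two paths based on `metadata.deletionTimestamp`.
#[derive(Debug, PartialEq)]
enum Event {
    Apply,   // object created or updated: write status, publish events
    Cleanup, // object being deleted: run cleanup, then remove the finalizer
}

// Decide which reconcile path to take for an observed object.
fn classify(deletion_timestamp: Option<&str>) -> Event {
    match deletion_timestamp {
        Some(_) => Event::Cleanup,
        None => Event::Apply,
    }
}

fn main() {
    assert_eq!(classify(None), Event::Apply);
    assert_eq!(classify(Some("2024-01-01T00:00:00Z")), Event::Cleanup);
    println!("ok");
}
```

Because the finalizer is only removed after cleanup succeeds, deletion of the object is blocked until the controller has had a chance to react — that is the "guaranteed delete handling".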
## Requirements

- A Kubernetes cluster / k3d instance
- The CRD
- An OpenTelemetry collector (optional)
### Cluster

As an example, get `k3d`, then:

```sh
k3d cluster create --registry-create --servers 1 --agents 1 main
k3d kubeconfig get --all > ~/.kube/k3d
export KUBECONFIG="$HOME/.kube/k3d"
```
A default k3d setup is fastest for local dev due to its local registry.
### CRD

Apply the CRD from the cached file, or pipe it from `crdgen` (best if changing it):

```sh
cargo run --bin crdgen | kubectl apply -f -
```
### Opentelemetry

Set up an OpenTelemetry collector in your cluster. Tempo / opentelemetry-operator / grafana agent should all work out of the box. If your collector does not support gRPC OTLP, you need to change the exporter in `main.rs`.

If you don't have a collector, you can build locally without the `telemetry` feature (`tilt up telemetry`), or pull images without the `otel` tag.
## Running

### Locally

```sh
cargo run
```

or, with optional telemetry (change as per requirements):

```sh
OPENTELEMETRY_ENDPOINT_URL=https://0.0.0.0:55680 RUST_LOG=info,kube=trace,controller=debug cargo run --features=telemetry
```
### In-cluster

Use either your locally built image or the one from dockerhub (which uses opentelemetry features by default). Edit the deployment's image tag appropriately, then:

```sh
kubectl apply -f yaml/deployment.yaml
kubectl wait --for=condition=available deploy/doc-controller --timeout=20s
kubectl port-forward service/doc-controller 8080:80
```

To build and deploy the image quickly, we recommend using tilt, via `tilt up` instead.
NB: the namespace is assumed to be `default`. If you need a different namespace, replace `default` with your namespace in the yaml, and point your current context at it (e.g. `kubectl config set-context --current --namespace=<namespace>`) to get all the commands here to work.
In either of the run scenarios, your app is listening on port `8080`, and it will observe `Document` events.
Try some of:

```sh
kubectl apply -f yaml/exampleServiceAlerter.yaml
kubectl delete doc lorem
kubectl edit doc lorem # change hidden
```
The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or in the `.status` object output of `kubectl get doc -oyaml`.
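For illustration, a hidden `Document` might come back from `kubectl get doc -oyaml` looking roughly like this — the `apiVersion` and exact status shape here are assumptions, so check your cluster's actual output:

```yaml
# Illustrative, truncated `kubectl get doc lorem -oyaml` output;
# apiVersion and status layout may differ in your version of the controller.
apiVersion: kube.rs/v1
kind: Document
metadata:
  name: lorem
spec:
  hidden: true
status:
  is_hidden: true
```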
### Webapp output

The sample web server exposes some example metrics and debug information you can inspect with `curl`.
```sh
$ kubectl apply -f yaml/exampleServiceAlerter.yaml
$ curl 0.0.0.0:8080/metrics
# HELP doc_controller_reconcile_duration_seconds The duration of reconcile to complete in seconds
# TYPE doc_controller_reconcile_duration_seconds histogram
doc_controller_reconcile_duration_seconds_bucket{le="0.01"} 1
doc_controller_reconcile_duration_seconds_bucket{le="0.1"} 1
doc_controller_reconcile_duration_seconds_bucket{le="0.25"} 1
doc_controller_reconcile_duration_seconds_bucket{le="0.5"} 1
doc_controller_reconcile_duration_seconds_bucket{le="1"} 1
doc_controller_reconcile_duration_seconds_bucket{le="5"} 1
doc_controller_reconcile_duration_seconds_bucket{le="15"} 1
doc_controller_reconcile_duration_seconds_bucket{le="60"} 1
doc_controller_reconcile_duration_seconds_bucket{le="+Inf"} 1
doc_controller_reconcile_duration_seconds_sum 0.013
doc_controller_reconcile_duration_seconds_count 1
# HELP doc_controller_reconciliation_errors_total reconciliation errors
# TYPE doc_controller_reconciliation_errors_total counter
doc_controller_reconciliation_errors_total 0
# HELP doc_controller_reconciliations_total reconciliations
# TYPE doc_controller_reconciliations_total counter
doc_controller_reconciliations_total 1
```
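The Prometheus text output above can be post-processed directly: the mean reconcile duration is the histogram's `_sum` divided by its `_count`. A std-only sketch (the `histogram_average` helper is illustrative, not part of the controller):

```rust
// Compute a histogram's mean from Prometheus text exposition:
// look up `<metric>_sum` and `<metric>_count` lines and divide.
fn histogram_average(scrape: &str, metric: &str) -> Option<f64> {
    let mut sum = None;
    let mut count = None;
    for line in scrape.lines() {
        let mut parts = line.split_whitespace();
        match (parts.next(), parts.next()) {
            (Some(name), Some(value)) if name == format!("{metric}_sum") => {
                sum = value.parse::<f64>().ok();
            }
            (Some(name), Some(value)) if name == format!("{metric}_count") => {
                count = value.parse::<f64>().ok();
            }
            _ => {}
        }
    }
    match (sum, count) {
        (Some(s), Some(c)) if c > 0.0 => Some(s / c),
        _ => None,
    }
}

fn main() {
    // Two lines taken from the scrape output shown above.
    let scrape = "\
doc_controller_reconcile_duration_seconds_sum 0.013
doc_controller_reconcile_duration_seconds_count 1";
    let avg = histogram_average(scrape, "doc_controller_reconcile_duration_seconds");
    println!("average reconcile: {avg:?}s");
}
```

With the sample scrape above this yields an average of 0.013s for the single recorded reconcile.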
```sh
$ curl 0.0.0.0:8080/
{"last_event":"2019-07-17T22:31:37.591320068Z"}
```
The metrics will be auto-scraped if you have a standard `PodMonitor` for `prometheus.io/scrape`.
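A minimal `PodMonitor` sketch, assuming the Prometheus Operator is installed and that the controller pods carry an `app: doc-controller` label and expose the metrics port as `http` — all of these names are assumptions, so match them to your deployment:

```yaml
# Illustrative PodMonitor (Prometheus Operator CRD); adjust labels,
# namespace, and port name to your actual deployment.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: doc-controller
spec:
  selector:
    matchLabels:
      app: doc-controller
  podMetricsEndpoints:
    - port: http
      path: /metrics
```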
### Reconciler

The example reconciler only checks the `.spec.hidden` bool. When it changes, it updates the `.status` object to reflect whether or not the instance `is_hidden`. It also sends a Kubernetes event associated with the controller. It is visible at the bottom of `kubectl describe doc samuel`.
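The core of that decision can be sketched with plain Rust. The struct and field names below mirror the `.spec.hidden` / `is_hidden` fields described above, but they are illustrative rather than the controller's actual types:

```rust
// Illustrative types mirroring the fields this README describes.
#[derive(Debug)]
struct DocumentSpec {
    hidden: bool,
}

#[derive(Debug, PartialEq)]
struct DocumentStatus {
    is_hidden: bool,
}

/// Derive the desired status from the observed spec. In the real
/// controller this result is patched onto `.status` via the API server,
/// alongside publishing a Kubernetes event.
fn reconcile_status(spec: &DocumentSpec) -> DocumentStatus {
    DocumentStatus { is_hidden: spec.hidden }
}

fn main() {
    let status = reconcile_status(&DocumentSpec { hidden: true });
    println!("is_hidden={}", status.is_hidden);
}
```

Keeping the spec-to-status mapping a pure function like this makes the reconciler's behaviour easy to unit-test without a cluster.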
While this controller has no child objects configured, there is a `configmapgen_controller` example in kube-rs.