
Rl tigerbeetle helm chart #403

Merged
merged 13 commits on Sep 13, 2022
3 changes: 2 additions & 1 deletion .prettierignore
Expand Up @@ -8,4 +8,5 @@ pnpm-lock.yaml
*.svg
Dockerfile
.gitignore
.prettierignore
.prettierignore
infrastructure/helm/**
23 changes: 23 additions & 0 deletions infrastructure/helm/tigerbeetle/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
Review comment (Collaborator Author):

Autogenerated by helm init.

# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
24 changes: 24 additions & 0 deletions infrastructure/helm/tigerbeetle/Chart.yaml
@@ -0,0 +1,24 @@
apiVersion: v2
Review comment (Collaborator Author):

autogenerated

name: tigerbeetle
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: '0.0.0'
18 changes: 18 additions & 0 deletions infrastructure/helm/tigerbeetle/README.md
@@ -0,0 +1,18 @@
Notes on tigerbeetle pod allocation:

The node pool for the tigerbeetle pods is created by Terraform. Nodes must _already exist_
before applying the helm chart: if the persistent volumes get scheduled on a single
node, the application pods will continue to be scheduled on that node even if other nodes
become available.

The persistent [disk types](https://cloud.google.com/compute/docs/disks/performance#regional-persistent-disks)
constrain I/O performance. There are several choices to make:

Zonal persistent disks vs regional persistent disks:

The [persistent volumes allocated in k8s in GKE](https://kubernetes.io/docs/concepts/storage/storage-classes/#gce-pd)
are always, AFAICT, [networked](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes), not local
to the node. The downside is higher access latency; the upside is that they can be [regional](https://cloud.google.com/compute/docs/disks/#repds)
and mirrored across two zones. I think our best option is regional SSD PVs; there's a [chart](https://cloud.google.com/compute/docs/disks/performance#zonal-persistent-disks)
comparing the performance. Oddly, Google's docs [elsewhere](https://cloud.google.com/compute/docs/disks) state that write
performance is worse on regional disks, but that isn't reflected in the published performance chart.
Review comment (Collaborator Author):

Still need to add the storage class resource and set the persistentvolumeclaim templates to use it.
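
A sketch of what such a storage class might look like for regional SSD persistent disks on GKE. The name, zone list, and parameter values here are assumptions for illustration, not part of this PR:

```yaml
# Hypothetical StorageClass for regional SSD persistent disks (GKE CSI driver).
# Name, zones, and parameters are assumptions, not part of this PR.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd
# Delay binding until a pod is scheduled so the disk lands in zones the
# node pool can actually reach.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values: [us-central1-a, us-central1-b]
```

The PVC templates would then reference it via `storageClassName: regional-ssd`.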

62 changes: 62 additions & 0 deletions infrastructure/helm/tigerbeetle/templates/_helpers.tpl
@@ -0,0 +1,62 @@
{{/*
Review comment (Collaborator Author):

Autogenerated

Expand the name of the chart.
*/}}
{{- define "tigerbeetle.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "tigerbeetle.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "tigerbeetle.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "tigerbeetle.labels" -}}
helm.sh/chart: {{ include "tigerbeetle.chart" . }}
{{ include "tigerbeetle.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "tigerbeetle.selectorLabels" -}}
app.kubernetes.io/name: {{ include "tigerbeetle.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "tigerbeetle.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "tigerbeetle.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
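
The truncate-and-trim logic in `tigerbeetle.fullname` can be mimicked in plain shell to see what names it produces. A sketch; the release names below are made up:

```shell
# Mirrors tigerbeetle.fullname: use the release name alone when it already
# contains the chart name, otherwise join release and chart; then truncate
# to 63 chars and trim one trailing "-", per the DNS name-length limit.
fullname() {
  release="$1"
  chart="$2"
  case "$release" in
    *"$chart"*) full="$release" ;;
    *)          full="$release-$chart" ;;
  esac
  full=$(printf '%s' "$full" | cut -c1-63)
  printf '%s\n' "${full%-}"
}

fullname my-release tigerbeetle        # my-release-tigerbeetle
fullname tigerbeetle-prod tigerbeetle  # tigerbeetle-prod
```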
31 changes: 31 additions & 0 deletions infrastructure/helm/tigerbeetle/templates/configmap.nginx.yaml
@@ -0,0 +1,31 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Chart.Name }}-nginx-proxy
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "tigerbeetle.labels" . | nindent 4 }}
data:
# property-like keys; each key maps to a simple value
nginx-prestart.sh: |
Review comment (Collaborator Author):

Dynamically build the nginx config because the DNS server address is only known at runtime, and it is required for the resolver directive for the backends.

#!/bin/sh
set -ex;
DNS_SERVER=$(grep -i '^nameserver' /etc/resolv.conf | head -n1 | cut -d ' ' -f2)
echo "$DNS_SERVER"
config=$(cat << EOF
events {}
stream {

{{- range ( untilStep 0 ( .Values.statefulset.replicas | int) 1 ) }}
server {
listen {{ ( add 3000 . ) }};
resolver $DNS_SERVER;
set \$backend {{ $.Chart.Name }}-{{ . }}.{{ $.Chart.Name }}.{{ $.Release.Namespace }}.svc.cluster.local:4242;
proxy_pass \$backend;
}
{{- end }}
}
EOF
)
echo "$config"
echo "$config" > /etc/nginx/nginx.conf
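
For reference, a shell sketch of what the prestart script produces: first the nameserver extraction run against sample resolv.conf contents, then a condensed version of the per-replica server blocks the Go-template range renders for `statefulset.replicas: 3`. The resolv.conf contents, chart name, and namespace below are assumptions:

```shell
# Hypothetical resolv.conf contents, standing in for the pod's real one.
resolv='search default.svc.cluster.local svc.cluster.local
nameserver 10.96.0.10
options ndots:5'

# Same pipeline shape as the prestart script's nameserver extraction.
DNS_SERVER=$(printf '%s\n' "$resolv" | grep -i '^nameserver' | head -n1 | cut -d ' ' -f2)
echo "$DNS_SERVER"   # 10.96.0.10

# One listener per replica: ports 3000..3002 proxy to the per-pod
# hostnames behind the headless service (namespace "default" assumed).
replicas=3
i=0
while [ "$i" -lt "$replicas" ]; do
  port=$((3000 + i))
  echo "server { listen $port; resolver $DNS_SERVER; proxy_pass tigerbeetle-$i.tigerbeetle.default.svc.cluster.local:4242; }"
  i=$((i + 1))
done
```

The real template additionally assigns the backend to a variable (`set $backend …; proxy_pass $backend;`) so nginx re-resolves it at request time instead of once at startup.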
15 changes: 15 additions & 0 deletions infrastructure/helm/tigerbeetle/templates/service.yaml
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "tigerbeetle.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "tigerbeetle.labels" . | nindent 4 }}
spec:
clusterIP: None
Review comment (Collaborator Author):

This creates a headless service, which means that the service itself has no hostname/IP (but the pods each have hostnames and IPs). This is appropriate since there isn't a use-case for a single service IP that is load-balanced to the individual pods.

ports:
- port: {{ .Values.service.port }}
targetPort: 4242
protocol: TCP
name: http
selector: {{- include "tigerbeetle.selectorLabels" . | nindent 4 }}
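
To make the headless-service behavior concrete, here is a sketch of the per-pod DNS names that resolve instead of a single service IP. The namespace and replica count are assumptions:

```shell
# With clusterIP: None, cluster DNS returns per-pod A records rather than
# one load-balanced service IP. Hypothetical: service "tigerbeetle",
# namespace "default", a StatefulSet with 3 replicas.
svc=tigerbeetle
ns=default
for i in 0 1 2; do
  echo "${svc}-${i}.${svc}.${ns}.svc.cluster.local"
done
```

These are the stable hostnames the nginx proxy config targets with its per-replica `proxy_pass` backends.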
13 changes: 13 additions & 0 deletions infrastructure/helm/tigerbeetle/templates/serviceaccount.yaml
@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
Review comment (Collaborator Author):

Autogenerated. Worth thinking about what kinds of permission / network restrictions are going to be necessary (there aren't any in this PR yet but if there are specific things that we want I can add them).

Reply (Collaborator):

@rluckom-coil do you think it's worth adding a role and role binding? I don't think tigerbeetle needs to access any of the k8s resources yet.

Reply (Collaborator Author):

It's a good question. Overall I think it might be best not to. I had initially added a storage class to this PR to set up the PVCs in the way that seemed best, but then I remembered that in our cluster we pretty much have a single "database-y" storage class that it's nice to be able to apply to whatever DBs we end up deploying. I think it might be the same for a role for this serviceaccount: in the cases where one is needed (and I agree that I'm not sure what those are yet), the roles might turn out to be environment-specific enough that people deploying this chart would rather supply them themselves.

For another perspective, the cockroach chart's role gives permission to view and create secrets that cockroach uses for client certificates. I think that once tigerbeetle has an auth mechanism (or secure communication between nodes) something like that might become appropriate.

But if you'd rather have them added now for expansion later I can definitely do that.

Reply (Collaborator):

Agreed, let's get this in and take a look at that later.

apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "tigerbeetle.serviceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "tigerbeetle.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}