This is a collection of Pulumi scripts I use to automate repetitive deployments of applications and services to Kubernetes. I am still learning both Pulumi and TypeScript, so some things could be more polished, but the code has been thoroughly tested and is actively maintained. It assumes you already have a Kubernetes cluster provisioned and Pulumi set up on your machine.
Note: when Pulumi config/secrets are required, the name of the configuration setting or secret must be prefixed with the name given to the app when instantiating the relevant class. For example, for a deployment of cert-manager named "cert-manager", the names of config/secrets must be prefixed with "cert-manager:".
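As a minimal illustration of how this namespacing maps to Pulumi's config API (the "myapp" name and keys below are hypothetical):

import * as pulumi from "@pulumi/pulumi";

// For a component instantiated with the name "myapp", its settings live under the "myapp:" prefix
const config = new pulumi.Config("myapp");
const version = config.require("version");      // set with: pulumi config set myapp:version ...
const apiKey = config.requireSecret("apiKey");   // set with: pulumi config set --secret myapp:apiKey ...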
Deploys cert-manager, the most popular solution for issuing and managing TLS certificates on Kubernetes with Let's Encrypt and other issuers. The code configures DNS01 challenges via Cloudflare alongside HTTP01 challenges; you may have to adapt it if you use another DNS provider or want to make this configurable.
Configuration can be passed either as arguments when initialising the relevant class, or as configuration and secrets stored in the Pulumi stack:
pulumi config set cert-manager:version v0.14.1
pulumi config set cert-manager:email <email address for the LetsEncrypt account>
pulumi config set cert-manager:cloudflareEmail <email address for the Cloudflare account>
pulumi config set --secret cert-manager:cloudflareAPIKey <your Cloudflare API key>
Minimal code required to deploy cert-manager using Pulumi stack configuration is as follows:
import { CertManager } from "../vendor/cert-manager/CertManager";
const certManager = new CertManager("cert-manager", {});
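Alternatively, the same settings can be passed as arguments when initialising the class; a sketch assuming the argument names mirror the config keys above (the email addresses are placeholders):

import * as pulumi from "@pulumi/pulumi";
import { CertManager } from "../vendor/cert-manager/CertManager";

const config = new pulumi.Config("cert-manager");

const certManager = new CertManager("cert-manager", {
    version: "v0.14.1",
    email: "admin@example.com",           // Let's Encrypt account email
    cloudflareEmail: "admin@example.com", // Cloudflare account email
    cloudflareAPIKey: config.requireSecret("cloudflareAPIKey"),
});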
Installs the Nginx ingress controller. It can be deployed as a DaemonSet with a ClusterIP service, or with a service of type NodePort or LoadBalancer.
To install as a DaemonSet using host ports and a service of type ClusterIP:
import { NginxIngress } from "../vendor/nginx-ingress/NginxIngress";
const nginxIngress = new NginxIngress("nginx-ingress", {
namespace: "nginx-ingress",
ingressClass: "nginx",
});
To install with a service of type NodePort:
const nginxIngress = new NginxIngress("nginx-ingress", {
namespace: "nginx-ingress",
serviceType: "NodePort",
ingressClass: "nginx",
nodePortHTTP: 30080,
nodePortHTTPS: 30443
});
To install with a service of type LoadBalancer:
const nginxIngress = new NginxIngress("nginx-ingress", {
namespace: "nginx-ingress",
serviceType: "LoadBalancer",
ingressClass: "nginx",
});
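Once the ingress controller and cert-manager are both installed, applications can be exposed with a standard Ingress resource; below is a minimal sketch using @pulumi/kubernetes, where the hostname, backend service, and ClusterIssuer name are placeholders:

import * as k8s from "@pulumi/kubernetes";

const appIngress = new k8s.networking.v1.Ingress("app-ingress", {
    metadata: {
        annotations: {
            // The issuer name depends on how cert-manager is configured in your stack
            "cert-manager.io/cluster-issuer": "letsencrypt-prod",
        },
    },
    spec: {
        ingressClassName: "nginx",
        tls: [{ hosts: ["app.example.com"], secretName: "app-example-com-tls" }],
        rules: [{
            host: "app.example.com",
            http: {
                paths: [{
                    path: "/",
                    pathType: "Prefix",
                    backend: { service: { name: "app", port: { number: 80 } } },
                }],
            },
        }],
    },
}, { dependsOn: [nginxIngress, certManager] });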
Installs the Hetzner Cloud CSI driver, which integrates Hetzner Cloud block storage with Kubernetes.
Configuration:
pulumi config set hetzner-cloud-csi:version 1.2.3
pulumi config set --secret hetzner-cloud-csi:token <your HC project token>
To install:
import { HetznerCloudCSI } from "../vendor/hetzner-cloud-csi/HetznerCloudCSI";
const hetznerCloudCSI = new HetznerCloudCSI("hetzner-cloud-csi", {});
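With the driver installed, volumes are provisioned dynamically through the hcloud-volumes storage class; a minimal sketch of a claim:

import * as k8s from "@pulumi/kubernetes";

const dataVolume = new k8s.core.v1.PersistentVolumeClaim("data", {
    spec: {
        accessModes: ["ReadWriteOnce"],      // Hetzner Cloud volumes attach to a single node at a time
        storageClassName: "hcloud-volumes",
        resources: { requests: { storage: "10Gi" } },
    },
}, { dependsOn: [hetznerCloudCSI] });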
The Zalando Postgres Operator makes it quick and easy to create Postgres clusters with replication, failover, WAL archiving to S3, and logical backups to S3. I wrote a blog post about it.
Configuration:
pulumi config set zalando-postgres-operator:s3Region ...
pulumi config set zalando-postgres-operator:s3Endpoint https://...
pulumi config set zalando-postgres-operator:s3Bucket ...
pulumi config set --secret zalando-postgres-operator:s3AccessKeyId ...
pulumi config set --secret zalando-postgres-operator:s3SecretAccessKey ...
Installation:
import { ZalandoPostgresOperator } from "../vendor/zalando-postgres-operator/ZalandoPostgresOperator";
const zalandoPostgresOperator = new ZalandoPostgresOperator("zalando-postgres-operator", {
namespace: "postgres-operator",
}, { dependsOn: [hetznerCloudCSI] });
Once the operator is installed, a Postgres cluster can be provisioned with the ZalandoPostgresCluster class.
Configuration:
pulumi config set postgres-cluster:s3Region ...
pulumi config set postgres-cluster:s3Endpoint https://...
pulumi config set postgres-cluster:s3Bucket ...
pulumi config set --secret postgres-cluster:s3AccessKeyId ...
pulumi config set --secret postgres-cluster:s3SecretAccessKey ...
import { ZalandoPostgresCluster } from "../vendor/zalando-postgres-operator/ZalandoPostgresCluster";
const postgresCluster = new ZalandoPostgresCluster("postgres-cluster", {
namespace: "postgres-cluster",
teamId: "postgres",
storageClass: "hcloud-volumes",
storageSize: "10Gi",
numberOfInstances: 3,
enableLogicalBackups: true,
enableWalBackups: true,
}, { dependsOn: [hetznerCloudCSI, zalandoPostgresOperator] });
To clone an existing cluster or restore it from a backup:
import { ZalandoPostgresCluster } from "../vendor/zalando-postgres-operator/ZalandoPostgresCluster";
const postgresCluster = new ZalandoPostgresCluster("postgres-cluster", {
namespace: "postgres-cluster",
teamId: "postgres",
storageClass: "hcloud-volumes",
storageSize: "10Gi",
numberOfInstances: 3,
enableLogicalBackups: true,
enableWalBackups: true,
clone: true,
cloneClusterName: "original-postgres-cluster",
cloneClusterID: "295bf786-adaa-4864-bd35-2982ef2532bc",
cloneTargetTime: "2050-02-04T12:49:03+00:00"
}, { dependsOn: [hetznerCloudCSI, zalandoPostgresOperator] });
cloneClusterName is the name of the original cluster that you want to clone or restore. cloneClusterID is the ID of the original cluster; if you no longer have access to the postgresql resource for that cluster, you can find the ID in the S3 directory where the backups are stored. cloneTargetTime is optional and allows point-in-time recovery by restoring the data as of a specific time; if omitted, the most recent backup is restored.
pgAdmin is a handy UI for Postgres that can run directly in Kubernetes.
Configuration:
pulumi config set pgadmin:email <login email>
pulumi config set --secret pgadmin:password <login password>
Installation:
import { PgAdmin } from "../vendor/pgadmin/PgAdmin";
const pgAdmin = new PgAdmin("pgadmin", {
namespace: "pgadmin",
persistenceEnabled: true
}, { dependsOn: [hetznerCloudCSI] });
Note that in my case it depends on Hetzner Cloud CSI since I enable persistence.
Velero is the most popular backup solution for Kubernetes. The code assumes S3-compatible storage is used for the backups.
Configuration:
pulumi config set velero:s3Bucket ...
pulumi config set velero:s3Region ...
pulumi config set velero:s3Url https://...
pulumi config set --secret velero:awsAccessKeyId ...
pulumi config set --secret velero:awsSecretAccessKey ...
Installation:
import { Velero } from "../vendor/velero/Velero";
const velero = new Velero("velero", {});
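Backups themselves are then defined with Velero's custom resources (or the velero CLI). As an illustrative sketch, a daily schedule could also be managed from Pulumi with a CustomResource; the namespace, cron expression, and retention below are placeholders:

import * as k8s from "@pulumi/kubernetes";

const dailyBackup = new k8s.apiextensions.CustomResource("daily-backup", {
    apiVersion: "velero.io/v1",
    kind: "Schedule",
    metadata: { namespace: "velero" },       // assumes Velero runs in the default "velero" namespace
    spec: {
        schedule: "0 2 * * *",               // every day at 02:00
        template: {
            includedNamespaces: ["*"],
            ttl: "720h",                     // keep backups for 30 days
        },
    },
}, { dependsOn: [velero] });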
Harbor is a very popular, feature-rich container registry.
Configuration:
pulumi config set --secret harbor:adminPassword ...
pulumi config set --secret harbor:secretKey ...
Installation:
import { Harbor } from "../vendor/harbor/Harbor";
const harbor = new Harbor("harbor", {}, { dependsOn: [hetznerCloudCSI] });
Deploys Redis either in standalone mode or in clustered mode.
import { Redis } from "../vendor/redis/Redis";
// standalone
const redisStandalone = new Redis("redis", {
namespace: "redis",
persistenceStorageClass: "hcloud-volumes"
}, { dependsOn: hetznerCloudCSI });
// clustered
const redis = new Redis("redis", {
namespace: "redis",
persistenceStorageClass: "hcloud-volumes",
clusterEnabled: true,
slaveCount: 2,
sentinelEnabled: true
}, { dependsOn: hetznerCloudCSI });
Installs Memcached, the popular distributed cache.
import { Memcached } from "../vendor/memcached/Memcached";
const memcached = new Memcached("memcached", {
replicaCount: 3,
memory: 2048
});
AnyCable is an alternative implementation of part of Action Cable - the native WebSocket solution for Ruby on Rails apps - that I am currently using. To install:
import { AnyCable } from "../vendor/anycable/AnyCable";
const anyCable = new AnyCable("anycable-go", {
hostname: "<web socket domain>",
redisChannel: "__anycable__",
rpcHost: "<URL of the Rails RPC service>",
namespace: "anycable-go",
logLevel: "debug",
imageTag: "1.0.0.preview1",
ingressClass: "<optional ingress class, default to 'nginx'>",
});
Installs a controller to automatically assign floating IPs in Hetzner Cloud to a healthy node. Note that each node must have a network interface configured with the floating IPs for this to work.
Configuration:
pulumi config set --secret hetzner-cloud-fip-controller:apiToken <Hetzner Cloud token>
Installation:
import { HetznerCloudFIPController } from "../vendor/hetzner-cloud-fip-controller/HetznerCloudFIPController";
const hetznerCloudFIPController = new HetznerCloudFIPController("hetzner-cloud-fip-controller", {
addresses: [
"<floating IP 1>",
"<floating IP 2>",
"<floating IP N>",
]
});
Installs MetalLB, which allows creating services of type LoadBalancer on-premises or when the provider doesn't offer load balancers that can be provisioned from Kubernetes. It expects a list of IPs or IP ranges. In Hetzner Cloud I am using this with floating IPs.
Configuration:
pulumi config set --secret metallb:secretKey "$(openssl rand -base64 128)"
Installation:
import { MetalLB } from "../vendor/metallb/MetalLB";
const metalLB = new MetalLB("metallb", {
addresses: [
"<IP or IP range 1>",
"<IP or IP range 2>",
"<IP or IP range N>",
]
});
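Once MetalLB is running, any Service of type LoadBalancer gets an address from the configured pool; a minimal sketch (the selector and ports are placeholders):

import * as k8s from "@pulumi/kubernetes";

const appService = new k8s.core.v1.Service("app-lb", {
    spec: {
        type: "LoadBalancer",                // MetalLB assigns an IP from the address pool
        selector: { app: "my-app" },
        ports: [{ port: 80, targetPort: 8080 }],
    },
}, { dependsOn: [metalLB] });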