Cooby Cloud Hetzner Install
In this document, we'll describe the setup of Kubernetes on Hetzner Cloud. The final cluster will have:
- A basic Kubernetes cluster up and running
- OpenEBS as dynamic storage provider
- Helm installed (with RBAC)
- nginx-ingress-controller
- CrunchyData Postgres with replicas on OpenEBS
- Rancher 2.x Hetzner cluster management
- Load balancing and DNS setup
- Cert-Manager and Let's Encrypt functionality
In a shell on your local Linux machine, run:
$ wget https://github.com/xetys/hetzner-kube/releases/download/0.3.1/hetzner-kube-linux-amd64
$ chmod a+x ./hetzner-kube-linux-amd64
$ sudo mv ./hetzner-kube-linux-amd64 /usr/local/bin/hetzner-kube
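As a quick smoke test, printing the CLI's built-in usage should work (assuming the binary follows the usual --help convention):
$ hetzner-kube --help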
In the Hetzner Cloud Console, create a new project "cooby-kube" and add an API token named "cooby-kube". Copy the token and run:
$ hetzner-kube context add cooby-kube
Token: <PASTE TOKEN HERE>
And finally, add your SSH key (assuming you already have one in ~/.ssh/id_rsa.pub) using:
$ hetzner-kube ssh-key add --name cooby-kube
And we are ready to go!
We'll create a special file called cloud-config that gets executed while the cluster is being created. This file removes the iscsi initiator from the nodes as they are created. We will apt-get install open-iscsi again on each node later on. This eliminates the iscsi-initiator conflict between Ubuntu, Kubernetes and OpenEBS.
#cloud-config
runcmd:
  - [ service, iscsid, stop ]
  - [ apt, remove, -y, open-iscsi ]
Deploying a cluster is as easy as:
$ hetzner-kube cluster create --name cooby-kube --ssh-key cooby-kube --worker-server-type cx21 --nodes 3 --cloud-init cloud-config
This will create 3 servers in your account: 2 workers of type CX21 and 1 master of type CX11.
In order to do anything with the cluster, you'll need to ssh into the master to get access to the kubectl command, or set up your local machine to use kubectl with the cluster's kubeconfig. To use kubectl on your local machine, first install kubectl:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo touch /etc/apt/sources.list.d/kubernetes.list
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubectl
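To verify the installation, check the client version (a quick sanity check; the reported version will vary):
$ kubectl version --client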
Then issue the command:
$ hetzner-kube cluster kubeconfig --name cooby-kube
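With the kubeconfig in place, kubectl should now be able to reach the cluster:
$ kubectl cluster-info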
Note: Once the cluster has been created and kubectl is functional, run kubectl get nodes and note the IP addresses of all nodes. Log in to each node via ssh and run apt-get install open-iscsi; this ensures the proper iscsi initiator is set up on all nodes. You should be able to ssh root@<nodeIP>. If not, you may need to run ssh-copy-id -i ~/.ssh/id_rsa.pub root@<nodeIP> first.
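As a shortcut, a loop like the following reinstalls open-iscsi on all nodes in one pass (replace the placeholder IPs with your own node addresses):
$ for ip in <node1IP> <node2IP> <node3IP>; do ssh root@$ip "apt-get install -y open-iscsi"; done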
OpenEBS is a container-native storage provider that supports dynamic provisioning: persistent volume claims are automatically bound to dynamically created persistent volumes. In plain English, it means containerized storage for containerized applications, i.e. no more separate and expensive SAN/NAS devices; resilient local storage is provided by the nodes themselves. On Hetzner Cloud, the installation is straightforward:
$ kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-operator.yaml
$ kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-storageclasses.yaml
You can check the status using kubectl get pod and watch as the maya and operator pods start running.
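For example (pod names and the available storage classes depend on the OpenEBS version behind those manifests):
$ kubectl get pod
$ kubectl get storageclass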
First we define a PersistentVolumeClaim, Deployment and Service in a file nginx.yaml:
echo "apiVersion: v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nginx
spec:
storageClassName: openebs-standalone
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100m
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 2
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
volumes:
- name: html
persistentVolumeClaim:
claimName: nginx
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
ports:
- port: 80
selector:
app: nginx
type: NodePort
" | kubectl apply -f -
Get the exact pod name of one nginx pod from kubectl get pod and run:
$ kubectl exec -it <pod-name> -- bash
root@pod:/# echo "hello world" > /usr/share/nginx/html/index.html
root@pod:/# exit
Now you can kill the pods with:
$ kubectl delete pod -l app=nginx
And wait until they are re-scheduled. Because the persistent volume is mounted at /usr/share/nginx/html, the data remains available even when the pods are killed.
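To confirm, read the file back from one of the freshly scheduled pods (substitute a real pod name; the output should be the "hello world" written earlier):
$ kubectl exec -it <new-pod-name> -- cat /usr/share/nginx/html/index.html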
As Hetzner Cloud does not provide a managed load balancer integration for Kubernetes, we will use nginx-ingress-controller for traffic routing.
First, we install Helm, with RBAC set up for Tiller:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
$ echo "apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
" | kubectl apply -f -
$ helm init --service-account tiller
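If Tiller started correctly, helm version reports both a client and a server version (the exact versions will vary):
$ helm version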
And now we can install Helm charts, for example:
$ helm install --name ingress --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP stable/nginx-ingress
This installs the nginx ingress controller as a DaemonSet, so one controller pod runs on each node.
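To confirm, the controller pods should appear on the worker nodes (the app=nginx-ingress label matches the stable/nginx-ingress chart defaults; adjust it if your chart version labels pods differently):
$ kubectl get pods -l app=nginx-ingress -o wide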