Cooby Cloud General Install
Kubernetes has a myriad of installation options available, and a lot of them try to lock you in. We'll cover a few of them briefly here and explain why we didn't choose them. Minikube is the best way to get started with a local Kubernetes installation, but for this guide make sure you have a minimum of three nodes set up for these instructions to work.
This guide will take you from bare metal servers to a 3-node Kubernetes cluster with a custom PVC/PV storage class using OpenEBS, Crunchy-Postgres with replicas, and Helm. Cooby Extras is where newer configurations like Cert-Manager, MayaOnline, a private Docker registry, GitLab CI/CD integration, the Odoo Operator, Dockery-Odoo, and Rancher 2.x will be placed. Helpful tips and bugs we ran into are sprinkled throughout their respective sections. Load balancing is addressed in the Hetzner installation document.
- First-class SSL support with LetsEncrypt so we can easily deploy new apps with SSL using just annotations.
- Bare metal for this build means a regular VM/VPS provider or a regular private provider like Hetzner with no special services - or actual hardware.
- No fancy requirements (like BIOS control) and completely vendor agnostic.
- Be reasonably priced (<$75/month).
- Be reasonably production-like (this is for POC/testing projects, not a huge business critical app). Production-like for this case means a single master with backups being taken of the node(s).
- Works with Ubuntu 16.04.
- Works on Vultr (and others like Digital Ocean) - providers that are (mostly) generic VM hosts and don't have specialized APIs and services like AWS/GCE.
- We also recommend making sure the VM provider supports a software defined firewall and a private network - however this is not a hard requirement. For example, Hetzner does not provide a private network but offers a VLAN service via vSwitch for free.
- OpenShift: Owned by RedHat - uses its own special tooling around oc. Minimum requirements were too high for a small cluster. Pretty high vendor lock-in.
- KubeSpray: unstable. It used to work pretty consistently around 1.6, but when trying to spin up a 1.9 or a 1.10 cluster it was unable to finish. We're fans of Ansible, and if you are too, this is the project to follow.
- GKE: Attempting to stay away from cloud-like providers so outside of the scope of this. If you want a managed offering and are okay with GKE pricing, choose this option.
- AWS: Staying away from cloud-like providers. Cost is also a big factor here since this is a POC-project cluster.
- Tectonic: Requirements are too much for a small cloud provider/installation ( PXE boot setup, Matchbox, F5 LB ).
- NetApp Trident: Licensing is required for many elements and it basically expects a NetApp storage cluster to be running.
- Kops: Only supports AWS and GCE.
- Canonical Juju: Requires MAAS; we attempted to use it but kept getting errors around lxc. Seems to favor cloud provider deployments (AWS/GCE/Azure).
- Kubicorn: No bare metal support, needs cloud provider APIs to work.
- Rancher: Rancher is pretty awesome; unfortunately it's incredibly easy to break the cluster and break things inside Rancher that make the cluster unstable. It does provide a very simple way to play with Kubernetes on whatever platform you want. It currently only visualizes apps/v1beta1, not apps/v1 (they are working on this); however, we're using it anyway. The instructions are in the Cooby Extras document.
At the time of this writing, the winner is kubeadm / OpenEBS. Kubeadm is not in any incubator stage and is documented as one of the official ways to get a cluster set up. OpenEBS is totally open source, has no restrictions on use, and has won numerous awards in the containerized storage space. Their commercial offering is called MayaData.
- 1 Master node ( 1 CPU / 2G RAM / 10G HD )
- 2 Worker nodes ( 2 CPU / 4G RAM / 20G HD )
Total cost: $45.00-$50.00 / mo. (from Vultr.com)
Note: You must have at least three servers for this installation to work correctly.
All internal server firewalls must be disabled.
$ sudo ufw disable
If there is a software firewall in front of the nodes (like AWS security groups), open all inbound TCP/UDP traffic to 172.0.0.0/8 and 10.0.0.0/8 (or however the internal networks are configured). There will always be two internal networks: one for the cluster servers and one for the Kubernetes pods to communicate with each other.
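If you'd rather not disable ufw entirely on the nodes, a minimal sketch that only opens the internal ranges used here would look something like this (the ranges are assumptions and must match your own internal networks):
$ sudo ufw allow from 10.0.0.0/8
$ sudo ufw allow from 172.0.0.0/8
$ sudo ufw reload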
#!/bin/sh
#
# Run on each server as root
#
apt-get update
apt-get upgrade -y

# Install Python so Ansible can connect and manage the node later
apt-get -y install python

# Derive the private LAN IP from the last octet of the public IP
IP_ADDR=$(echo 10.99.0.$(ip route get 8.8.8.8 | awk '{print $NF; exit}' | cut -d. -f4))

# Configure the private network interface (ens7 on Vultr)
cat <<- EOF >> /etc/network/interfaces
auto ens7
iface ens7 inet static
address $IP_ADDR
netmask 255.255.0.0
mtu 1450
EOF
ifup ens7

# Install the Ubuntu-packaged Docker
apt-get install -y apt-transport-https
apt -y install docker.io
systemctl start docker
systemctl enable docker

# Add the Kubernetes apt repository and install kubeadm, kubelet and kubectl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
reboot
The first part of the script installs Python (used so Ansible can connect and do things later), updates and upgrades everything, and then adds the private network address. Since Vultr gives you a true private network interface, we're cheating a bit and just using the last octet of the public IP to define the internal LAN IP (see the example below). Next we install the Ubuntu-packaged version of Docker - this is important. A lot of tools don't bundle the proper Docker version to go along with their Kubernetes installation, and that can cause all kinds of issues, including everything not working due to version mismatches. Finally we add the Kubernetes apt repository and install the kubeadm tooling and Kubernetes itself.
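As a concrete example of the private-IP derivation above (the public address shown is hypothetical):
$ ip route get 8.8.8.8 | awk '{print $NF; exit}'
203.0.113.27
$ echo 10.99.0.$(ip route get 8.8.8.8 | awk '{print $NF; exit}' | cut -d. -f4)
10.99.0.27
So a server whose public IP ends in .27 gets 10.99.0.27 on ens7.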
Execute init-script.sh on each node (master, worker01, worker02):
$ chmod a+x init-script.sh
$ sudo ./init-script.sh
On the master node run kubeadm to init the cluster and start the Kubernetes services:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This will start the cluster and set up a pod network on 10.244.0.0/16 for internal pods to use. Next you'll notice that the node is in a NotReady state when you do a kubectl get nodes; we need to set up our worker nodes next.
You can either continue using kubectl on the master node or copy the config to your workstation (depending on how the network permissions are set up):
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
You'll get a token command to run on the workers from the previous step. If we need to generate new tokens later on when we're expanding our cluster, we can use kubeadm token list and kubeadm token create on the master. The join command will look similar to the following:
$ sudo kubeadm join 172.31.24.122:6443 --token ugl553.4dx5spnp..... --discovery-token-ca-cert-hash sha256:538dac3c113ea99a0c65d90fd679b8d330cb49e044.....
Important Note: Your worker nodes MUST have a unique hostname, otherwise they will join the cluster and overwrite each other (the first node will disappear and things will get rebalanced to the node you just joined). If this happens to you and you want to reset a node, you can run kubeadm reset to wipe that worker node.
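For example, to give a worker a unique hostname before joining (the hostname and join parameters below are placeholders; use the values printed by kubeadm init):
$ sudo hostnamectl set-hostname k8s-worker01
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>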
Back on the master node we can add our Flannel network overlay. This will let the pods reside on different worker nodes and communicate with each other over internal DNS and IPs.
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
After a few seconds you should see output from kubectl get nodes similar to this (depending on hostnames):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 4d v1.12.1
k8s-worker01 Ready <none> 4d v1.12.1
k8s-worker02 Ready <none> 4d v1.12.1
Up until this point we've just been using kubectl apply and kubectl create to install apps. Going forward we'll mostly be using Helm to manage our applications and install things.
$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
$ tar zxvf helm-v2.11.0-linux-amd64.tar.gz
$ cd linux-amd64/
$ sudo cp helm /usr/local/bin
$ sudo helm init    # this will also install Tiller
Next we’re going to create a helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Now we can apply everything:
$ kubectl create -f helm-rbac.yaml
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --upgrade
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
First we install the RBAC permissions, service accounts, and role bindings. Next we install Helm and initialize Tiller on the server; Tiller keeps track of which apps are deployed where and when they need updates. Finally we tell the Tiller deployment about its new ServiceAccount. You can verify things are working with a helm ls.
Note: Helm is great, but sometimes it breaks. If your deployments/upgrades/deletes are hanging, try bouncing the Tiller pod:
$ kubectl delete po -n kube-system -l name=tiller
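To confirm Tiller came back after the bounce (a quick check, using the same label and namespace as above):
$ kubectl get pods -n kube-system -l name=tiller
$ helm version
Both the client and server versions should be reported once the new Tiller pod is ready.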
$ cd ~
$ wget https://openebs.github.io/charts/openebs-operator-0.7.0.yaml
$ kubectl get nodes
$ kubectl label nodes <node-name> node=openebs
$ nano openebs-operator-0.7.0.yaml
(Add the section below to the openebs-provisioner, maya-apiserver, openebs-snapshot-operator, and openebs-ndm specs, just after spec: -> serviceAccountName: openebs-maya-operator.)
nodeSelector:
  node: openebs
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "deployments", "events", "endpoints", "configmaps"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: ["*"]
  resources: ["disks"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["storagepoolclaims", "storagepools"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["castemplates", "runtasks"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["cstorpools", "cstorvolumereplicas", "cstorvolumes"]
  verbs: ["*"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
  namespace: openebs
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
- kind: User
  name: system:serviceaccount:default:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
# This is the install related config. It specifies the version of openebs
# components i.e. custom operators that gets installed. This config is
# used by maya-apiserver.
apiVersion: v1
kind: ConfigMap
metadata:
  name: maya-install-config
  namespace: openebs
data:
  install: |
    spec:
      install:
      - version: "0.7.0"
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: maya-apiserver
  namespace: openebs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: maya-apiserver
    spec:
      serviceAccountName: openebs-maya-operator
      nodeSelector:
        node: openebs
      containers:
      - name: maya-apiserver
        imagePullPolicy: IfNotPresent
        image: openebs/m-apiserver:0.7.0
        ports:
        - containerPort: 5656
        env:
        # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://172.28.128.3:8080"
        # OPENEBS_IO_INSTALL_CONFIG_NAME specifies the config map containing the install configuration.
        # Currently, the configuration can be used to specify the default version for the CAS Templates
        - name: OPENEBS_IO_INSTALL_CONFIG_NAME
          value: "maya-install-config"
        # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
        # configured as a part of openebs installation.
        # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
          value: "true"
        # OPENEBS_NAMESPACE provides the namespace of this deployment as an
        # environment variable
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        # OPENEBS_MAYA_POD_NAME provides the name of this pod as
        # environment variable
        - name: OPENEBS_MAYA_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
          value: "openebs/jiva:0.7.0"
        - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
          value: "openebs/jiva:0.7.0"
        - name: OPENEBS_IO_JIVA_REPLICA_COUNT
          value: "3"
        - name: OPENEBS_IO_CSTOR_TARGET_IMAGE
          value: "openebs/cstor-istgt:0.7.0"
        - name: OPENEBS_IO_CSTOR_POOL_IMAGE
          value: "openebs/cstor-pool:0.7.0"
        - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
          value: "openebs/cstor-pool-mgmt:0.7.0"
        - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
          value: "openebs/cstor-volume-mgmt:0.7.0"
        - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
          value: "openebs/m-exporter:0.7.0"
---
apiVersion: v1
kind: Service
metadata:
  name: maya-apiserver-service
  namespace: openebs
spec:
  ports:
  - name: api
    port: 5656
    protocol: TCP
    targetPort: 5656
  selector:
    name: maya-apiserver
  sessionAffinity: None
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: openebs-provisioner
  namespace: openebs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: openebs-provisioner
    spec:
      serviceAccountName: openebs-maya-operator
      nodeSelector:
        node: openebs
      containers:
      - name: openebs-provisioner
        imagePullPolicy: IfNotPresent
        image: openebs/openebs-k8s-provisioner:0.7.0
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that provisioner should forward the volume create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: openebs-snapshot-operator
  namespace: openebs
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-snapshot-operator
    spec:
      serviceAccountName: openebs-maya-operator
      nodeSelector:
        node: openebs
      containers:
      - name: snapshot-controller
        image: openebs/snapshot-controller:0.7.0
        imagePullPolicy: IfNotPresent
        env:
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that snapshot controller should forward the snapshot create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
      - name: snapshot-provisioner
        image: openebs/snapshot-provisioner:0.7.0
        imagePullPolicy: IfNotPresent
        env:
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that snapshot provisioner should forward the clone create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  # udev-probe is default or primary probe which should be enabled to run ndm
  # filterconfigs contains configs of filters - in the form of include
  # and exclude comma separated strings
  node-disk-manager.config: |
    {
      "probeconfigs": [
        {
          "key": "udev-probe",
          "name": "udev probe",
          "state": "true"
        },
        {
          "key": "smart-probe",
          "name": "smart probe",
          "state": "true"
        }
      ],
      "filterconfigs": [
        {
          "key": "os-disk-exclude-filter",
          "name": "os disk exclude filter",
          "state": "true"
        },
        {
          "key": "vendor-filter",
          "name": "vendor filter",
          "state": "true",
          "include": "",
          "exclude": "CLOUDBYT,OpenEBS"
        },
        {
          "key": "path-filter",
          "name": "path filter",
          "state": "true",
          "include": "",
          "exclude": "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"
        }
      ]
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: openebs-ndm
  namespace: openebs
spec:
  template:
    metadata:
      labels:
        name: openebs-ndm
    spec:
      # By default the node-disk-manager will be run on all kubernetes nodes
      # If you would like to limit this to only some nodes, say the nodes
      # that have storage attached, you could label those nodes and use
      # nodeSelector.
      #
      # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
      # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
      #nodeSelector:
      #  "openebs.io/nodegroup": "storage-node"
      serviceAccountName: openebs-maya-operator
      nodeSelector:
        node: openebs
      hostNetwork: true
      containers:
      - name: node-disk-manager
        command:
        - /usr/sbin/ndm
        - start
        image: openebs/node-disk-manager-amd64:v0.1.0
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /host/node-disk-manager.config
          subPath: node-disk-manager.config
          readOnly: true
        - name: udev
          mountPath: /run/udev
        - name: procmount
          mountPath: /host/mounts
        - name: sparsepath
          mountPath: /var/openebs/sparse
        env:
        # pass hostname as env variable using downward API to the NDM container
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # specify the directory where the sparse files need to be created.
        # if not specified, then sparse files will not be created.
        - name: SPARSE_FILE_DIR
          value: "/var/openebs/sparse"
        # Size(bytes) of the sparse file to be created.
        - name: SPARSE_FILE_SIZE
          value: "10737418240"
        # Specify the number of sparse files to be created
        - name: SPARSE_FILE_COUNT
          value: "1"
      volumes:
      - name: config
        configMap:
          name: openebs-ndm-config
      - name: udev
        hostPath:
          path: /run/udev
          type: Directory
      # mount /proc/1/mounts (mount file of process 1 of host) inside container
      # to read which partition is mounted on / path
      - name: procmount
        hostPath:
          path: /proc/1/mounts
      - name: sparsepath
        hostPath:
          path: /var/openebs/sparse
---
Then apply it:
$ kubectl apply -f openebs-operator-0.7.0.yaml
Get the Storage Classes installed in our cluster with the following command:
$ kubectl get sc
Following is an example output:
NAME PROVISIONER AGE
openebs-cstor-sparse openebs.io/provisioner-iscsi 6m
openebs-jiva-default openebs.io/provisioner-iscsi 6m
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 6m
standard (default) kubernetes.io/gce-pd 19m
We are using the default Storage Class which is created as part of the openebs-operator-0.7.0.yaml installation. We modify this existing default Storage Class by editing it with kubectl:
$ kubectl edit sc openebs-jiva-default
Then we can add the following entries to the cas.openebs.io/config annotation in our storage class:
- name: TargetNodeSelector
  value: |-
    node: appnode
- name: ReplicaNodeSelector
  value: |-
    node: openebs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
      - name: StoragePool
        value: default
      - name: TargetNodeSelector
        value: |-
          node: appnode
      - name: ReplicaNodeSelector
        value: |-
          node: openebs
      #- name: TargetResourceLimits
      #  value: |-
      #    memory: 1Gi
      #    cpu: 100m
      #- name: AuxResourceLimits
      #  value: |-
      #    memory: 0.5Gi
      #    cpu: 50m
      #- name: ReplicaResourceLimits
      #  value: |-
      #    memory: 2Gi
    openebs.io/cas-type: jiva
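Note that TargetNodeSelector only takes effect if some nodes actually carry the node=appnode label; Kubernetes labels are a key/value map, so a node already labeled node=openebs cannot also hold node=appnode. Assuming you have a separate application node, a sketch of labeling it (the node name is a placeholder):
$ kubectl label nodes <app-node-name> node=appnode
$ kubectl get nodes --show-labels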
In the next section we prepare OpenEBS for Helm install and OpenEBS-Jiva provisioning.
$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ kubectl -n kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
$ cd ~
$ git clone https://github.com/openebs/openebs.git
$ sudo helm package openebs/k8s/charts/openebs
$ git clone https://github.com/openebs/charts.git
$ cd charts
$ mv ../openebs-*.tgz ./docs
$ sudo helm repo index docs --url https://openebs.github.io/charts
$ sudo helm repo add openebs-charts https://openebs.github.io/charts/
$ sudo helm repo update
$ sudo helm install openebs-charts/openebs
Note: The OpenEBS control plane pods are now created under the openebs namespace. The Node Disk Manager, CAS Templates, default Storage Pool, and default Storage Classes are created after executing the above command. We are selecting Jiva as the storage engine.
You can get the OpenEBS pods status by running following command:
$ kubectl get pods -n openebs
Node Disk Manager manages the disks associated with each node in the cluster. You can get the disk details by running the following command:
$ kubectl get disk
A CAS Template is an approach to provisioning persistent volumes that makes use of a CAS storage engine. The following command helps check the CAS Template components:
$ kubectl get castemplate
Also, it installs the default Jiva storage class which can be used in your application yaml to run the application. You can get the storage classes that are already applied by using the following command:
$ kubectl get sc
The following is an example output:
NAME PROVISIONER AGE
openebs-cstor-sparse openebs.io/provisioner-iscsi 8m
openebs-jiva-default openebs.io/provisioner-iscsi 8m
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 8m
standard (default) kubernetes.io/gce-pd 29m
The OpenEBS installation also creates a Jiva storage pool. By default it is created under "/var/openebs" on the host path of the nodes.
You can get the storage pool details by running the following command.
$ kubectl get sp
We have now deployed an OpenEBS cluster with the Jiva engine. It can create OpenEBS Jiva volumes on the default storage pool. By default, an OpenEBS Jiva volume runs with 3 replicas.
Apply the sample PVC YAML to create a Jiva volume using the following command:
$ kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc-standard-jiva-default.yaml
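For reference, that sample claim is roughly equivalent to applying the following (a sketch reconstructed from the PVC details shown below; the exact upstream file may differ slightly):
$ cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G
EOF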
Get the pvc details by running the following command:
$ kubectl get pvc
The following is an example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-vol1-claim Bound default-demo-vol1-claim-473439503 G RWO openebs-jiva-default m
Get the pv details by running the following command:
$ kubectl get pv
The following is an example output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
default-demo-vol1-claim-473439503 4G RWO Delete Bound default/demo-vol1-claim openebs-jiva-default 7m
Note: Use this pvc name in your application yaml to run your application using OpenEBS Jiva volume.
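For example, a pod spec would consume the volume by referencing the claim name (the pod name, image, and mount path below are placeholders):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: demo-vol1
      mountPath: /data
  volumes:
  - name: demo-vol1
    persistentVolumeClaim:
      claimName: demo-vol1-claim
EOF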
The StatefulSet specification JSONs are available at openebs/k8s/demo/crunchy-postgres.
The number of replicas in the StatefulSet can be modified in the set.json file. The following example uses two replicas, which includes one master and one slave. The Postgres pods are configured as primary/master or as replica/slave by a startup script which decides the role based on the ordinal assigned to the pod.
{
  "apiVersion": "apps/v1beta1",
  "kind": "StatefulSet",
  "metadata": {
    "name": "pgset"
  },
  "spec": {
    "serviceName": "pgset",
    "replicas": 2,
    "template": {
      "metadata": {
        "labels": {
          "app": "pgset"
        }
      },
      "spec": {
        "securityContext": {
          "fsGroup": 26
        },
        "containers": [
          {
            "name": "pgset",
            "image": "crunchydata/crunchy-postgres:centos7-10.0-1.6.0",
            "ports": [
              {
                "containerPort": 5432,
                "name": "postgres"
              }
            ],
            "env": [{
              "name": "PG_PRIMARY_USER",
              "value": "primaryuser"
            }, {
              "name": "PGHOST",
              "value": "/tmp"
            }, {
              "name": "PG_MODE",
              "value": "set"
            }, {
              "name": "PG_PRIMARY_PASSWORD",
              "value": "password"
            }, {
              "name": "PG_USER",
              "value": "testuser"
            }, {
              "name": "PG_PASSWORD",
              "value": "password"
            }, {
              "name": "PG_DATABASE",
              "value": "userdb"
            }, {
              "name": "PG_ROOT_PASSWORD",
              "value": "password"
            }, {
              "name": "PG_PRIMARY_PORT",
              "value": "5432"
            }, {
              "name": "PG_PRIMARY_HOST",
              "value": "pgset-primary"
            }],
            "volumeMounts": [
              {
                "name": "pgdata",
                "mountPath": "/pgdata",
                "readOnly": false
              }
            ]
          }
        ]
      }
    },
    "volumeClaimTemplates": [
      {
        "metadata": {
          "name": "pgdata"
        },
        "spec": {
          "accessModes": [
            "ReadWriteOnce"
          ],
          "storageClassName": "openebs-jiva-default",
          "resources": {
            "requests": {
              "storage": "400M"
            }
          }
        }
      }
    ]
  }
}
Note: Make sure the storageClassName above matches the one from kubectl get pvc. In our example it's openebs-jiva-default.
Run the following commands:
$ cd openebs/k8s/demo/crunchy-postgres/
$ ls -ltr
total 32
-rw-rw-r-- 1 test test 300 Nov 14 16:27 set-service.json
-rw-rw-r-- 1 test test 97 Nov 14 16:27 set-sa.json
-rw-rw-r-- 1 test test 558 Nov 14 16:27 set-replica-service.json
-rw-rw-r-- 1 test test 555 Nov 14 16:27 set-master-service.json
-rw-rw-r-- 1 test test 1879 Nov 14 16:27 set.json
-rwxrwxr-x 1 test test 1403 Nov 14 16:27 run.sh
-rw-rw-r-- 1 test test 1292 Nov 14 16:27 README.md
-rwxrwxr-x 1 test test 799 Nov 14 16:27 cleanup.sh
$ ./run.sh
+++ dirname ./run.sh
++ cd .
++ pwd
+ DIR=/home/test/openebs/k8s/demo/crunchy-postgres
+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-sa.json
serviceaccount "pgset-sa" created
+ kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
clusterrolebinding "permissive-binding" created
+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-service.json
service "pgset" created
+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-primary-service.json
service "pgset-primary" created
+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-replica-service.json
service "pgset-replica" created
+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set.json
statefulset "pgset" created
Verify that all the OpenEBS persistent volumes are created and that the Crunchy-Postgres services and pods are running using the following commands:
$ kubectl get statefulsets
NAME DESIRED CURRENT AGE
pgset 2 2 15m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
maya-apiserver-2245240594-ktfs2 1/1 Running 0 3h
openebs-provisioner-4230626287-t8pn9 1/1 Running 0 3h
pgset-0 1/1 Running 0 3m
pgset-1 1/1 Running 0 3m
pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-ctrl-3572426415-n8ctb 1/1 Running 0 3m
pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-rep-3113668378-9437w 1/1 Running 0 3m
pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-rep-3113668378-xnt12 1/1 Running 0 3m
pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-ctrl-2773298268-x3dlb 1/1 Running 0 3m
pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-rep-723453814-hpkw3 1/1 Running 0 3m
pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-rep-723453814-tpjqm 1/1 Running 0 3m
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 4h
maya-apiserver-service 10.98.249.191 <none> 5656/TCP 3h
pgset None <none> 5432/TCP 14m
pgset-primary 10.104.32.113 <none> 5432/TCP 14m
pgset-replica 10.99.40.69 <none> 5432/TCP 14m
pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-ctrl-svc 10.111.243.121 <none> 3260/TCP,9501/TCP 14m
pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-ctrl-svc 10.102.138.94 <none> 3260/TCP,9501/TCP 13m
$ kubectl get clusterrolebinding permissive-binding
NAME AGE
permissive-binding 15m
Note: It may take some time for the pods to start as the images must be pulled and instantiated. This is also dependent on the network speed.
You can verify the deployment using the following procedure.
- Check cluster replication status between the Postgres primary and replica pods
- Create a table in the default database as Postgres user testuser on the primary pod
- Check data synchronization on the replica pod for the table you have created
- Verify that a new table cannot be created on the replica pod (it is read-only)
Install the PostgreSQL client utility (psql) on any of the Kubernetes machines to perform database operations from the command line.
$ sudo apt-get install postgresql-client -y
Identify the IP Address of the primary (pgset-0) pod or the service (pgset-primary) and execute the following query:
$ kubectl describe pod pgset-0 | grep IP
IP: 10.47.0.3
$ psql -h 10.47.0.3 -U testuser postgres -c 'select * from pg_stat_replication'
 pid | usesysid |   usename   | application_name | client_addr | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_lsn  | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state
-----+----------+-------------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+-----------+-----------+-----------+------------+-----------+-----------+------------+---------------+------------
  94 |    16391 | primaryuser | pgset-1          | 10.44.0.0   |                 |       60460 | 2017-11-14 09:29:21.990782-05 |              | streaming | 0/3014278 | 0/3014278 | 0/3014278 | 0/3014278  |           |           |            |             0 | async
(1 row)
The replica should be registered for asynchronous replication.
The following queries should be executed on the primary pod.
$ psql -h 10.47.0.3 -U testuser postgres -c 'create table foo(id int)'
Password for user testuser:
CREATE TABLE
$ psql -h 10.47.0.3 -U testuser postgres -c 'insert into foo values (1)'
Password for user testuser:
INSERT 0 1
Identify the IP Address of the replica (pgset-1) pod or the service (pgset-replica) and execute the following command.
$ kubectl describe pod pgset-1 | grep IP
IP: 10.44.0.6
$ psql -h 10.44.0.6 -U testuser postgres -c 'table foo'
Password for user testuser:
id
---
1
(1 row)
Verify that the table content is replicated successfully.
Attempt to create a new table on the replica, and verify that the creation is unsuccessful:
$ psql -h 10.44.0.6 -U testuser postgres -c 'create table bar(id int)'
Password for user testuser:
ERROR: cannot execute CREATE TABLE in a read-only transaction