
Containers


Container images are built nightly with the latest nightly RPMs of GD2 and GlusterFS. The built images are available on Docker Hub as gluster/glusterd2-nightly.

Usage with Docker

Pull the container image

$ docker pull gluster/glusterd2-nightly
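
To confirm the pull, the local image list can be filtered by repository (a plain Docker command, nothing GD2-specific):

$ docker images gluster/glusterd2-nightly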

Start the container

$ docker run --name gd2 -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro gluster/glusterd2-nightly
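
The client and peer addresses that GD2 advertises (seen in the status output below) correspond to the container's bridge IP, which can be looked up with a standard Docker command:

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' gd2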

Exec into the container and run glustercli

$ docker exec -t -i gd2 bash
[root@3762444f8998 /]# glustercli peer status
+--------------------------------------+--------------+------------------+------------------+--------+-----+
|                  ID                  |     NAME     | CLIENT ADDRESSES |  PEER ADDRESSES  | ONLINE | PID |
+--------------------------------------+--------------+------------------+------------------+--------+-----+
| c5e4dd2f-d08a-4a79-9f15-6a0ef4e9bc1f | 3762444f8998 | 127.0.0.1:24007  | 172.17.0.3:24008 | yes    |  25 |
|                                      |              | 172.17.0.3:24007 |                  |        |     |
+--------------------------------------+--------------+------------------+------------------+--------+-----+
[root@3762444f8998 /]# 
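
To try a multi-node cluster on a single host, a second container can be started and joined as a peer. This is a sketch: it assumes the nightly glustercli supports peer add and that both containers sit on the default bridge network; replace <gd2-2-ip> with the second container's IP from docker inspect, using the peer port 24008 shown in the status output above.

$ docker run --name gd2-2 -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro gluster/glusterd2-nightly
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' gd2-2
$ docker exec -t -i gd2 glustercli peer add <gd2-2-ip>:24008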

Usage with Kubernetes

TODO: Add details on setting up etcd cluster

Launch GD2 pods on all nodes and create a service endpoint for GD2 using the following configuration file.

---
apiVersion: v1
kind: Namespace
metadata:
  name: gluster-storage
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gluster
  namespace: gluster-storage
  labels:
    gluster-storage: glusterd2
spec:
  selector:
    matchLabels:
      name: glusterd2-daemon
  template:
    metadata:
      labels:
        name: glusterd2-daemon
    spec:
      containers:
        - name: glusterd2
          image: docker.io/gluster/glusterd2-nightly:latest
# TODO: Enable the below once passing environment variables to the containers is fixed
#          env:
#            - name: GD2_RESTAUTH
#              value: "false"
# Enable if an external etcd cluster has been set up
#            - name: GD2_ETCDENDPOINTS
#              value: "http://gluster-etcd:2379"
# Generate and set a random uuid here
#            - name: GD2_CLUSTER_ID
#              value: "9610ec0b-17e7-405e-82f7-5f78d0b22463"
          securityContext:
            capabilities: {}
            privileged: true
          volumeMounts:
            - name: gluster-dev
              mountPath: "/dev"
            - name: gluster-cgroup
              mountPath: "/sys/fs/cgroup"
              readOnly: true
            - name: gluster-lvm
              mountPath: "/run/lvm"
            - name: gluster-kmods
              mountPath: "/usr/lib/modules"
              readOnly: true

      volumes:
        - name: gluster-dev
          hostPath:
            path: "/dev"
        - name: gluster-cgroup
          hostPath:
            path: "/sys/fs/cgroup"
        - name: gluster-lvm
          hostPath:
            path: "/run/lvm"
        - name: gluster-kmods
          hostPath:
            path: "/usr/lib/modules"

---
apiVersion: v1
kind: Service
metadata:
  name: glusterd2-service
  namespace: gluster-storage
spec:
  selector:
    name: glusterd2-daemon
  ports:
    - protocol: TCP
      port: 24007
      targetPort: 24007
# GD2 will be available on kube-host:31007 externally
      nodePort: 31007
  type: NodePort

Deploy on Kubernetes

$ kubectl create -f gd2.yaml
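
Once created, the DaemonSet should schedule one GD2 pod per node and the NodePort service should be listed in the gluster-storage namespace. Both can be verified with ordinary kubectl commands (the generated pod names will differ from the example below):

$ kubectl --namespace=gluster-storage get pods -o wide
$ kubectl --namespace=gluster-storage get service glusterd2-service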

Exec into one of the created pods to run glustercli

$ kubectl --namespace=gluster-storage exec -t -i gluster-knx2z bash
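
Inside the pod, the same glustercli commands from the Docker section apply; peer status should list one entry per node running the DaemonSet. Per the service definition above, the GD2 REST API is also reachable externally at kube-host:31007.

[root@gluster-knx2z /]# glustercli peer status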