This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Commit

Merge pull request #570 from ngtuna/kafka-optional
kafka installation as optional
ngtuna authored Feb 14, 2018
2 parents 62f3d38 + ee9bad1 commit c667d0c
Showing 9 changed files with 243 additions and 204 deletions.
5 changes: 3 additions & 2 deletions .travis.yml
@@ -54,9 +54,9 @@ before_install:
# or if the build is from the "master" branch
minikube_kafka)
if [[ "$TRAVIS_PULL_REQUEST" != false ]]; then
-pr_kafka_title=$(curl -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/$TRAVIS_REPO_SLUG/pulls/${TRAVIS_PULL_REQUEST}" | grep title | grep -i kafka || true)
+pr_kafka_title=$(curl "https://api.github.com/repos/$TRAVIS_REPO_SLUG/pulls/${TRAVIS_PULL_REQUEST}" | grep title || true)
fi
if [[ "$TRAVIS_PULL_REQUEST" == false || "$pr_kafka_title" != "" ]]; then
if [[ "$TRAVIS_PULL_REQUEST" == false || "$pr_kafka_title" == "" || "$pr_kafka_title" =~ ^.*(Kafka|kafka|KAFKA).*$ ]]; then
export SHOULD_TEST=1
fi
;;
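
In effect, the Kafka test job now runs when the build is not a pull request, when the PR title could not be fetched (leaving `pr_kafka_title` empty, e.g. on an unauthenticated, rate-limited API call), or when the title mentions Kafka in any casing. A quick local illustration of the regex clause, using a hypothetical title string:

```console
$ pr_kafka_title='"title": "Make kafka installation optional",'
$ [[ "$pr_kafka_title" =~ ^.*(Kafka|kafka|KAFKA).*$ ]] && echo run-kafka-tests
run-kafka-tests
```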
@@ -162,6 +162,7 @@ deploy:
- kubeless-${TRAVIS_TAG}.yaml
- kubeless-rbac-${TRAVIS_TAG}.yaml
- kubeless-openshift-${TRAVIS_TAG}.yaml
+- kafka-zookeeper-${TRAVIS_TAG}.yaml
- bundles/kubeless_*.zip
skip_cleanup: true
overwrite: true
4 changes: 3 additions & 1 deletion Makefile
@@ -37,14 +37,16 @@ binary-cross:
$(KUBECFG) show -o yaml $< > $@.tmp
mv $@.tmp $@

-all-yaml: kubeless.yaml kubeless-rbac.yaml kubeless-openshift.yaml
+all-yaml: kubeless.yaml kubeless-rbac.yaml kubeless-openshift.yaml kafka-zookeeper.yaml

kubeless.yaml: kubeless.jsonnet

kubeless-rbac.yaml: kubeless-rbac.jsonnet kubeless.jsonnet

kubeless-openshift.yaml: kubeless-openshift.jsonnet kubeless-rbac.jsonnet

+kafka-zookeeper.yaml: kafka-zookeeper.jsonnet

docker/controller: controller-build
cp $(BUNDLES)/kubeless_$(OS)-$(ARCH)/kubeless-controller $@

41 changes: 28 additions & 13 deletions README.md
@@ -30,13 +30,17 @@ Installation is made of three steps:

* Download the `kubeless` CLI from the [release page](https://github.com/kubeless/kubeless/releases). (OSX users can also use [brew](https://brew.sh/): `brew install kubeless`).
* Create a `kubeless` namespace (used by default)
-* Then use one of the YAML manifests found in the release page to deploy kubeless. It will create a _functions_ Custom Resource Definition and launch a controller. You will see a _kubeless_ controller, a _kafka_ and a _zookeeper_ statefulset appear and shortly get in running state.
+* Then use one of the YAML manifests found in the release page to deploy kubeless. It will create a _functions_ Custom Resource Definition and launch a controller.

There are several kubeless manifests shipped for different k8s environments (non-RBAC, RBAC and OpenShift); pick the one that corresponds to your environment:

-* [`kubeless-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/v0.2.4/kubeless-v0.2.4.yaml) is used for non-RBAC Kubernetes cluster.
-* [`kubeless-rbac-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/v0.2.4/kubeless-rbac-v0.2.4.yaml) is used for RBAC-enabled Kubernetes cluster.
-* [`kubeless-openshift-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/v0.2.4/kubeless-openshift-v0.2.4.yaml) is used to deploy Kubeless to OpenShift (1.5+).
+* [`kubeless-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml) is used for non-RBAC Kubernetes clusters.
+* [`kubeless-rbac-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-rbac-$RELEASE.yaml) is used for RBAC-enabled Kubernetes clusters.
+* [`kubeless-openshift-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-openshift-$RELEASE.yaml) is used to deploy Kubeless to OpenShift (1.5+).

+We also provide an optional `kafka-zookeeper` statefulset manifest as a convenient way to try out the PubSub mechanism (a sample deployment command follows below).

+* [`kafka-zookeeper-$RELEASE.yaml`](https://github.com/kubeless/kubeless/releases/download/$RELEASE/kafka-zookeeper-$RELEASE.yaml)
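
A minimal deployment sketch, assuming the `kubeless` namespace already exists and substituting your release tag for `$RELEASE`:

```console
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kafka-zookeeper-$RELEASE.yaml
```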

For example, the following shows how to deploy kubeless to a non-RBAC Kubernetes cluster.

@@ -47,19 +51,12 @@ $ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELE

$ kubectl get pods -n kubeless
NAME READY STATUS RESTARTS AGE
-kafka-0 1/1 Running 0 1m
kubeless-controller-3331951411-d60km 1/1 Running 0 1m
-zoo-0 1/1 Running 0 1m

$ kubectl get deployment -n kubeless
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubeless-controller 1 1 1 1 1m

-$ kubectl get statefulset -n kubeless
-NAME DESIRED CURRENT AGE
-kafka 1 1 1m
-zoo 1 1 1m

$ kubectl get customresourcedefinition
NAME KIND
functions.k8s.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
@@ -83,7 +80,7 @@ You are now ready to create functions.
You can use the CLI to create a function. Functions have three possible types:

* http triggered (function will expose an HTTP endpoint)
-* pubsub triggered (function will consume events on a specific topic)
+* pubsub triggered (function will consume events on a specific topic; kafka/zookeeper statefulsets are required)
* schedule triggered (function will be called on a cron schedule)

### HTTP function
@@ -162,7 +159,25 @@ Kubeless also supports [ingress](https://kubernetes.io/docs/concepts/services-ne

### PubSub function

-A function can be as simple as:
+We provide several [PubSub runtimes](https://hub.docker.com/r/kubeless/) (images with the `event-consumer` suffix) for each supported language, which let you quickly deploy functions that use the PubSub mechanism. A PubSub function consumes input messages from a predefined Kafka topic, which means Kafka is required. On the Kubeless [release page](https://github.com/kubeless/kubeless/releases) you can find a manifest that quickly deploys the Kafka and Zookeeper statefulsets.
+
+Once deployed, you can verify that the two statefulsets are up and running:
+
+```console
+$ kubectl -n kubeless get statefulset
+NAME      DESIRED   CURRENT   AGE
+kafka     1         1         40s
+zoo       1         1         42s
+
+$ kubectl -n kubeless get svc
+NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+broker      ClusterIP   None            <none>        9092/TCP            1m
+kafka       ClusterIP   10.55.250.89    <none>        9092/TCP            1m
+zoo         ClusterIP   None            <none>        2888/TCP,3888/TCP   1m
+zookeeper   ClusterIP   10.55.249.102   <none>        2181/TCP            1m
+```
+
+Now you can deploy a pubsub function. A function can be as simple as:

```python
def foobar(context):
    print context
    return context
```
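
With the statefulsets running, here is a hedged sketch of deploying and invoking such a function with the `kubeless` CLI (`test.py`, the handler name, and the topic are placeholders; the `--trigger-topic` flag and `kubeless topic` subcommands follow the contemporaneous kubeless docs):

```console
$ kubeless function deploy test --runtime python2.7 \
    --handler test.foobar --from-file test.py --trigger-topic test-topic
$ kubeless topic create test-topic
$ kubeless topic publish --topic test-topic --data "Hello PubSub"
```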
5 changes: 5 additions & 0 deletions docs/GKE-deployment.md
@@ -73,6 +73,11 @@ export KUBELESS_VERSION=<latest version>
kubectl create namespace kubeless
kubectl create -f https://github.com/kubeless/kubeless/releases/download/v$KUBELESS_VERSION/kubeless-rbac-v$KUBELESS_VERSION.yaml
```
+Optionally, if you want to use PubSub functions, also deploy the provided Kafka/Zookeeper system:
+
+```
+kubectl create -f https://github.com/kubeless/kubeless/releases/download/v$KUBELESS_VERSION/kafka-zookeeper-v$KUBELESS_VERSION.yaml
+```

## Kubeless on GKE 1.8.x

196 changes: 196 additions & 0 deletions kafka-zookeeper.jsonnet
@@ -0,0 +1,196 @@
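// Standalone manifests for the optional Kafka/Zookeeper PubSub stack,
// previously bundled with the core kubeless manifests (note the
// kubeless-openshift.jsonnet cleanup below).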
local k = import "ksonnet.beta.1/k.libsonnet";

local statefulset = k.apps.v1beta1.statefulSet;
local container = k.core.v1.container;
local service = k.core.v1.service;

local namespace = "kubeless";
local controller_account_name = "controller-acct";

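// Broker configuration for the bitnami/kafka image; the broker advertises
// itself through the headless "broker" service in the "kubeless" namespace.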
local kafkaEnv = [
{
name: "KAFKA_ADVERTISED_HOST_NAME",
value: "broker.kubeless"
},
{
name: "KAFKA_ADVERTISED_PORT",
value: "9092"
},
{
name: "KAFKA_PORT",
value: "9092"
},
{
name: "KAFKA_DELETE_TOPIC_ENABLE",
value: "true"
},
{
name: "KAFKA_ZOOKEEPER_CONNECT",
value: "zookeeper.kubeless:2181"
},
{
name: "ALLOW_PLAINTEXT_LISTENER",
value: "yes"
}
];

local zookeeperEnv = [
{
name: "ZOO_SERVERS",
value: "server.1=zoo-0.zoo:2888:3888:participant"
},
{
name: "ALLOW_ANONYMOUS_LOGIN",
value: "yes"
}
];

local zookeeperPorts = [
{
containerPort: 2181,
name: "client"
},
{
containerPort: 2888,
name: "peer"
},
{
containerPort: 3888,
name: "leader-election"
}
];

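// Containers are pinned to the bitnami images by digest for reproducible deploys.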
local kafkaContainer =
container.default("broker", "bitnami/kafka@sha256:0c4be25cd3b31176a4c738da64d988d614b939021bedf7e1b0cc72b37a071ecb") +
container.imagePullPolicy("IfNotPresent") +
container.env(kafkaEnv) +
container.ports({containerPort: 9092}) +
container.livenessProbe({tcpSocket: {port: 9092}, initialDelaySeconds: 30}) +
container.volumeMounts([
{
name: "datadir",
mountPath: "/bitnami/kafka/data"
}
]);

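// The init container relaxes permissions on the data volume so the non-root
// bitnami user can write to it.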
local kafkaInitContainer =
container.default("volume-permissions", "busybox") +
container.imagePullPolicy("IfNotPresent") +
container.command(["sh", "-c", "chmod -R g+rwX /bitnami"]) +
container.volumeMounts([
{
name: "datadir",
mountPath: "/bitnami/kafka/data"
}
]);

local zookeeperContainer =
container.default("zookeeper", "bitnami/zookeeper@sha256:f66625a8a25070bee18fddf42319ec58f0c49c376b19a5eb252e6a4814f07123") +
container.imagePullPolicy("IfNotPresent") +
container.env(zookeeperEnv) +
container.ports(zookeeperPorts) +
container.volumeMounts([
{
name: "zookeeper",
mountPath: "/bitnami/zookeeper"
}
]);

local zookeeperInitContainer =
container.default("volume-permissions", "busybox") +
container.imagePullPolicy("IfNotPresent") +
container.command(["sh", "-c", "chmod -R g+rwX /bitnami"]) +
container.volumeMounts([
{
name: "zookeeper",
mountPath: "/bitnami/zookeeper"
}
]);

local kafkaLabel = {kubeless: "kafka"};
local zookeeperLabel = {kubeless: "zookeeper"};

local kafkaVolumeCT = [
{
"metadata": {
"name": "datadir"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Gi"
}
}
}
}
];

local zooVolumeCT = [
{
"metadata": {
"name": "zookeeper"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Gi"
}
}
}
}
];

local kafkaSts =
statefulset.default("kafka", namespace) +
statefulset.spec({serviceName: "broker"}) +
{spec+: {template: {metadata: {labels: kafkaLabel}}}} +
{spec+: {volumeClaimTemplates: kafkaVolumeCT}} +
{spec+: {template+: {spec: {containers: [kafkaContainer], initContainers: [kafkaInitContainer]}}}};

local zookeeperSts =
statefulset.default("zoo", namespace) +
statefulset.spec({serviceName: "zoo"}) +
{spec+: {template: {metadata: {labels: zookeeperLabel}}}} +
{spec+: {volumeClaimTemplates: zooVolumeCT}} +
{spec+: {template+: {spec: {containers: [zookeeperContainer], initContainers: [zookeeperInitContainer]}}}};

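// "kafka" and "zookeeper" are the client-facing ClusterIP services; the
// headless "broker" and "zoo" services back the two statefulsets.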
local kafkaSvc =
service.default("kafka", namespace) +
service.spec(k.core.v1.serviceSpec.default()) +
service.mixin.spec.ports({port: 9092}) +
service.mixin.spec.selector({kubeless: "kafka"});

local kafkaHeadlessSvc =
service.default("broker", namespace) +
service.spec(k.core.v1.serviceSpec.default()) +
service.mixin.spec.ports({port: 9092}) +
service.mixin.spec.selector({kubeless: "kafka"}) +
{spec+: {clusterIP: "None"}};

local zookeeperSvc =
service.default("zookeeper", namespace) +
service.spec(k.core.v1.serviceSpec.default()) +
service.mixin.spec.ports({port: 2181, name: "client"}) +
service.mixin.spec.selector({kubeless: "zookeeper"});

local zookeeperHeadlessSvc =
service.default("zoo", namespace) +
service.spec(k.core.v1.serviceSpec.default()) +
// The peer port must match ZOO_SERVERS (zoo-0.zoo:2888:3888), not Kafka's 9092.
service.mixin.spec.ports([{port: 2888, name: "peer"}, {port: 3888, name: "leader-election"}]) +
service.mixin.spec.selector({kubeless: "zookeeper"}) +
{spec+: {clusterIP: "None"}};

{
kafkaSts: k.util.prune(kafkaSts),
zookeeperSts: k.util.prune(zookeeperSts),
kafkaSvc: k.util.prune(kafkaSvc),
kafkaHeadlessSvc: k.util.prune(kafkaHeadlessSvc),
zookeeperSvc: k.util.prune(zookeeperSvc),
zookeeperHeadlessSvc: k.util.prune(zookeeperHeadlessSvc),
}
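
To regenerate the YAML manifest from this jsonnet by hand, the Makefile's generic rule reduces to roughly the following (a sketch assuming `kubecfg` is installed and the `ksonnet.beta.1` library is on its import path):

```console
$ kubecfg show -o yaml kafka-zookeeper.jsonnet > kafka-zookeeper.yaml
$ kubectl create -f kafka-zookeeper.yaml
```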
2 changes: 0 additions & 2 deletions kubeless-openshift.jsonnet
@@ -7,6 +7,4 @@ kubeless + {
controller: kubeless.controller + { apiVersion: "extensions/v1beta1" },
controllerClusterRole: kubeless.controllerClusterRole + { apiVersion: "v1" },
controllerClusterRoleBinding: kubeless.controllerClusterRoleBinding + { apiVersion: "v1" },
-kafkaSts: kubeless.kafkaSts + {spec+: {template+: {spec+: { initContainers: [] }}}},
-zookeeperSts: kubeless.zookeeperSts + {spec+: {template+: {spec+: { initContainers: [] }}}}
}