From 0aa61ec781efeaaa2b2c69c1b8a41cd547ceb558 Mon Sep 17 00:00:00 2001
From: Tuna
Date: Wed, 14 Feb 2018 00:06:55 +0700
Subject: [PATCH 1/2] add a doc for existing kafka

---
 docs/README.md             |  1 +
 docs/use-existing-kafka.md | 64 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)
 create mode 100644 docs/use-existing-kafka.md

diff --git a/docs/README.md b/docs/README.md
index 23691480e..d73d2a0d7 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -7,6 +7,7 @@
 - [Running on azure container services](kubeless-on-azure-container-services.md)
 - [Monitoring](monitoring.md)
 - [Autoscaling](autoscaling.md)
+- [Use existing Kafka cluster](use-existing-kafka.md)

 ## Development

diff --git a/docs/use-existing-kafka.md b/docs/use-existing-kafka.md
new file mode 100644
index 000000000..d952d7654
--- /dev/null
+++ b/docs/use-existing-kafka.md
@@ -0,0 +1,64 @@
# How to deploy a Kubeless PubSub function with an existing Kafka cluster in Kubernetes

On the Kubeless [release page](https://github.com/kubeless/kubeless/releases), we provide, alongside the Kubeless manifests, a collection of Kafka and Zookeeper statefulsets that helps users quickly deploy PubSub functions. These statefulsets are deployed in the `kubeless` namespace. However, if you already have a Kafka cluster running in the same Kubernetes cluster, this doc will walk you through deploying a Kubeless PubSub function with it.
Let's assume that you have a Kafka cluster running in the `pubsub` namespace like below:

```
$ kubectl -n pubsub get po
NAME      READY     STATUS    RESTARTS   AGE
kafka-0   1/1       Running   0          7h
zoo-0     1/1       Running   0          7h

$ kubectl -n pubsub get svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kafka       ClusterIP   10.55.253.151   <none>        9092/TCP   7h
zookeeper   ClusterIP   10.55.248.146   <none>        2181/TCP   7h
```

And Kubeless is already running in the `kubeless` namespace:

```
$ kubectl -n kubeless get po
NAME                                   READY     STATUS    RESTARTS   AGE
kubeless-controller-58676964bb-l79gh   1/1       Running   0          5d
```

Kubeless provides several language-specific [PubSub runtimes](https://hub.docker.com/r/kubeless/), identified by the `event-consumer` suffix, that help you quickly deploy functions using the PubSub mechanism. These runtimes read their Kafka configuration from two environment variables:

- `KUBELESS_KAFKA_SVC`: the name of the Kafka service in the Kubernetes cluster.
- `KUBELESS_KAFKA_NAMESPACE`: the namespace in which Kafka is running.

In this example, we will deploy the function with the environment variables `KUBELESS_KAFKA_SVC=kafka` and `KUBELESS_KAFKA_NAMESPACE=pubsub`.

Now deploy one of the functions provided in the `examples` folder:

```
$ kubeless function deploy pubsub-python --trigger-topic s3-python --runtime python2.7 --handler pubsub.handler --from-file examples/python/pubsub.py --env KUBELESS_KAFKA_SVC=kafka --env KUBELESS_KAFKA_NAMESPACE=pubsub
```

The `pubsub-python` function simply prints out the messages it receives from the `s3-python` topic. Check that the function is up and running:

```
$ kubectl get po
NAME                             READY     STATUS    RESTARTS   AGE
pubsub-python-5445bdcb64-48bv2   1/1       Running   0          4s
```

Now we need to create the `s3-python` topic and publish some messages to it. You can do this with your own Kafka client.
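As a point of reference for what the deployed code does, an echo-style handler in the spirit of `examples/python/pubsub.py` (a minimal sketch; the actual example file may differ) just prints whatever message it receives:

```python
def handler(context):
    # The PubSub runtime invokes the handler with the consumed Kafka message;
    # printing it makes the message visible in the pod logs (kubectl logs).
    print(context)
    return context
```

The `event-consumer` runtime subscribes to the trigger topic and calls `handler` once per message, so every message published to `s3-python` should show up in the function's logs.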
In this example, I will use the Kafka binaries bundled in the Kafka container:

```
# create the s3-python topic
$ kubectl -n pubsub exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper.pubsub:2181 --replication-factor 1 --partitions 1 --topic s3-python

# send a test message to the s3-python topic
$ kubectl -n pubsub exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic s3-python
> hello world
```

Open another terminal and check the pubsub function's logs to see that it received the message:

```
$ kubectl logs -f pubsub-python-5445bdcb64-48bv2
hello world
```

From 284985a639acdf89ca510b027d9648f7f12015d5 Mon Sep 17 00:00:00 2001
From: Tuna
Date: Wed, 14 Feb 2018 23:35:08 +0700
Subject: [PATCH 2/2] link to the doc on README

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 33b9c48ca..0e8367bd8 100644
--- a/README.md
+++ b/README.md
@@ -80,7 +80,7 @@ You are now ready to create functions.
 You can use the CLI to create a function. Functions have three possible types:

 * http triggered (function will expose an HTTP endpoint)
-* pubsub triggered (function will consume event on a specific topic; kafka/zookeeper statefulsets are required)
+* pubsub triggered (function will consume events on a specific topic; a running Kafka cluster in your Kubernetes environment is required)
 * schedule triggered (function will be called on a cron schedule)

 ### HTTP function

@@ -159,7 +159,7 @@ Kubeless also supports [ingress](https://kubernetes.io/docs/concepts/services-ne

 ### PubSub function

-We provide several [PubSub runtimes](https://hub.docker.com/r/kubeless/),which has suffix `event-consumer`, specified for languages that help you to quickly deploy your function with PubSub mechanism. The PubSub function will expect to consume input messages from a predefined Kafka topic which means Kafka is required.
In Kubeless [release page](https://github.com/kubeless/kubeless/releases), you can find the manifest to quickly deploy a collection of Kafka and Zookeeper statefulsets.
+We provide several language-specific [PubSub runtimes](https://hub.docker.com/r/kubeless/), identified by the `event-consumer` suffix, that help you quickly deploy functions using the PubSub mechanism. A PubSub function consumes input messages from a predefined Kafka topic, which means Kafka is required. On the Kubeless [release page](https://github.com/kubeless/kubeless/releases), you can find the manifest to quickly deploy a collection of Kafka and Zookeeper statefulsets. If you already have a Kafka cluster running in the same Kubernetes environment, you can also deploy PubSub functions with it; check out [this tutorial](./docs/use-existing-kafka.md) for more details on how to do that.
 Once deployed, you can verify two statefulsets up and running: