Merge pull request #591 from ngtuna/kafka-doc
Add documentation to explain how to use an existing Kafka cluster
arapulido authored Feb 15, 2018
2 parents c667d0c + 284985a commit 52f1cee
Showing 3 changed files with 67 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -80,7 +80,7 @@ You are now ready to create functions.
You can use the CLI to create a function. Functions have three possible types:

* http triggered (function will expose an HTTP endpoint)
* pubsub triggered (function will consume event on a specific topic; kafka/zookeeper statefulsets are required)
* pubsub triggered (function will consume events on a specific topic; a running Kafka cluster in your Kubernetes cluster is required)
* schedule triggered (function will be called on a cron schedule)

### HTTP function
@@ -159,7 +159,7 @@ Kubeless also supports [ingress](https://kubernetes.io/docs/concepts/services-ne

### PubSub function

We provide several [PubSub runtimes](https://hub.docker.com/r/kubeless/),which has suffix `event-consumer`, specified for languages that help you to quickly deploy your function with PubSub mechanism. The PubSub function will expect to consume input messages from a predefined Kafka topic which means Kafka is required. In Kubeless [release page](https://github.com/kubeless/kubeless/releases), you can find the manifest to quickly deploy a collection of Kafka and Zookeeper statefulsets.
We provide several [PubSub runtimes](https://hub.docker.com/r/kubeless/), identified by the `event-consumer` suffix, that help you quickly deploy functions for each supported language using the PubSub mechanism. A PubSub function consumes input messages from a predefined Kafka topic, which means a Kafka cluster is required. On the Kubeless [release page](https://github.com/kubeless/kubeless/releases) you can find a manifest to quickly deploy a collection of Kafka and Zookeeper statefulsets. If you already have a Kafka cluster running in the same Kubernetes environment, you can also deploy PubSub functions with it. Check out [this tutorial](./docs/use-existing-kafka.md) for more details on how to do that.

Once deployed, you can verify two statefulsets up and running:

1 change: 1 addition & 0 deletions docs/README.md
@@ -7,6 +7,7 @@
- [Running on azure container services](kubeless-on-azure-container-services.md)
- [Monitoring](monitoring.md)
- [Autoscaling](autoscaling.md)
- [Use existing Kafka cluster](use-existing-kafka.md)

## Development

64 changes: 64 additions & 0 deletions docs/use-existing-kafka.md
@@ -0,0 +1,64 @@
# How to deploy a Kubeless PubSub function with an existing Kafka cluster in Kubernetes

On the Kubeless [release page](https://github.com/kubeless/kubeless/releases), alongside the Kubeless manifests, we provide a collection of Kafka and Zookeeper statefulsets that helps users quickly deploy PubSub functions. These statefulsets are deployed in the `kubeless` namespace. However, if you already have a Kafka cluster running in the same Kubernetes cluster, this doc will walk you through how to deploy a Kubeless PubSub function with it.

Let's assume that you have a Kafka cluster running in the `pubsub` namespace, like below:

```
$ kubectl -n pubsub get po
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 7h
zoo-0 1/1 Running 0 7h
$ kubectl -n pubsub get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka ClusterIP 10.55.253.151 <none> 9092/TCP 7h
zookeeper ClusterIP 10.55.248.146 <none> 2181/TCP 7h
```

And Kubeless is already running in the `kubeless` namespace:

```
$ kubectl -n kubeless get po
NAME READY STATUS RESTARTS AGE
kubeless-controller-58676964bb-l79gh 1/1 Running 0 5d
```

Kubeless provides several [PubSub runtimes](https://hub.docker.com/r/kubeless/), identified by the `event-consumer` suffix, that help you quickly deploy functions for each supported language using the PubSub mechanism. Those runtimes read the Kafka configuration from two environment variables:

- `KUBELESS_KAFKA_SVC`: the name of the Kafka service in the Kubernetes cluster.
- `KUBELESS_KAFKA_NAMESPACE`: the namespace in which Kafka is running.

In this example, when deploying the function we will declare the two environment variables `KUBELESS_KAFKA_SVC=kafka` and `KUBELESS_KAFKA_NAMESPACE=pubsub`.
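
To illustrate what these variables are used for, here is a minimal sketch of how an event-consumer runtime might combine them into a broker address and subscribe to the function's topic. This is not the actual Kubeless runtime code; it uses the `kafka-python` client for illustration, and `TOPIC_NAME` is a hypothetical variable standing in for the trigger topic.

```
import os

from kafka import KafkaConsumer  # from the kafka-python package, used here for illustration

# Build the broker address from the two environment variables described above.
kafka_svc = os.environ.get("KUBELESS_KAFKA_SVC", "kafka")
kafka_ns = os.environ.get("KUBELESS_KAFKA_NAMESPACE", "kubeless")
broker = "{}.{}:9092".format(kafka_svc, kafka_ns)  # e.g. kafka.pubsub:9092

# TOPIC_NAME is a hypothetical stand-in for the function's trigger topic.
topic = os.environ.get("TOPIC_NAME", "s3-python")

consumer = KafkaConsumer(topic, bootstrap_servers=[broker])
for message in consumer:
    # A real runtime would invoke the user's handler here; we just print the payload.
    print(message.value)
```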

We now deploy one of the functions provided in the `examples` folder with the command below:

```
$ kubeless function deploy pubsub-python --trigger-topic s3-python --runtime python2.7 --handler pubsub.handler --from-file examples/python/pubsub.py --env KUBELESS_KAFKA_SVC=kafka --env KUBELESS_KAFKA_NAMESPACE=pubsub
```
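
For reference, the handler for this kind of function can be very simple. The sketch below shows roughly what a PubSub handler for the Python 2.7 runtime can look like; it is not necessarily the exact content of `examples/python/pubsub.py`.

```
def handler(context):
    # The runtime passes each consumed Kafka message to the handler;
    # printing it makes the message show up in the pod logs.
    print(context)
```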

The `pubsub-python` function will simply print out the messages it receives from the `s3-python` topic. Check that the function is up and running:

```
$ kubectl get po
NAME READY STATUS RESTARTS AGE
pubsub-python-5445bdcb64-48bv2 1/1 Running 0 4s
```

Now we need to create the `s3-python` topic and publish some test messages to it. You can do this with your own Kafka client; in this example we use the binaries bundled in the Kafka container:

```
# create s3-python topic
$ kubectl -n pubsub exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper.pubsub:2181 --replication-factor 1 --partitions 1 --topic s3-python
# send test message to s3-python topic
$ kubectl -n pubsub exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic s3-python
> hello world
```
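
If you want to double-check that the message actually reached the topic, you can also run the console consumer bundled in the same Kafka container (assuming the same Bitnami image layout as above); it should echo back the `hello world` message published in the previous step:

```
$ kubectl -n pubsub exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic s3-python --from-beginning
```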

Open another terminal and check the PubSub function's logs to see if it received the message:

```
$ kubectl logs -f pubsub-python-5445bdcb64-48bv2
hello world
```
