blubberblah/k8s-scaling-demo
Edit: 05/02/2023

Notes to get this running with recent versions of Kubernetes and charts:

A local image build is needed, or installation of ./charts/rabbitmq-sample-app will fail due to a missing image:

docker build . --no-cache -t theryanbaker/ryantest:latest

Publish messages:

kubectl run publish -it --rm --image=theryanbaker/ryantest:latest --image-pull-policy=Never --restart=Never publish 50

Prometheus query to get RabbitMQ metrics:

rabbitmq_queue_messages{app_kubernetes_io_instance="rabbitmq-server-scaling-demo",namespace="rabbitmq-scaling-demo"}

Query the same metric through the custom metrics API:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/rabbitmq-scaling-demo/pods/*/rabbitmq_queue_messages
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/rabbitmq-scaling-demo/pods/rabbitmq-server-scaling-demo-0/rabbitmq_queue_messages
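These raw calls return a MetricValueList object from the custom metrics API. As a rough sketch, here is how you might pull the queue depth for each pod out of such a response; the JSON below is an illustrative, made-up example that follows the v1beta1 MetricValueList shape, not captured output:

```python
import json

# Illustrative custom.metrics.k8s.io/v1beta1 response body.
# Field names follow the MetricValueList schema; the values are made up.
sample_response = """
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "rabbitmq-scaling-demo",
        "name": "rabbitmq-server-scaling-demo-0"
      },
      "metricName": "rabbitmq_queue_messages",
      "value": "500"
    }
  ]
}
"""

def queue_depths(body: str) -> dict:
    """Map pod name -> metric value from a MetricValueList JSON body.

    Note: metric values are Kubernetes quantities and can carry suffixes
    (e.g. "500m"); this sketch assumes plain integers.
    """
    doc = json.loads(body)
    return {item["describedObject"]["name"]: int(item["value"])
            for item in doc["items"]}

print(queue_depths(sample_response))
```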

Introduction

This is a demo deployment illustrating how to leverage a Kubernetes custom metric to scale pods based on the depth of a RabbitMQ queue. It leverages RabbitMQ-Server, Prometheus, Prometheus Adapter, and a sample Python worker and publisher script.

This repo is meant to go along with my blog post.

Requirements

Everything you need to deploy this demo is located within this repo. However, there are a few requirements you will need in order to get it working:

  1. A Kubernetes cluster with kubectl set up (minikube will be fine)
  2. Helm deployed to the K8S cluster and a local helm client (Instructions here)
  3. Support for HPA version v2beta2 (kubectl get apiservices | grep "autoscaling")

Deployment

To make this as easy as possible, I have included a deploy.sh script, which will deploy all the helm charts that are needed, as well as deploying a sample consumer of messages.

To get started, clone the repo locally, then run the deploy script:

./deploy.sh

This will deploy RabbitMQ, Prometheus, the Prometheus Adapter, and a sample RabbitMQ Python application.

Playing with the example

If you are interested in an explanation of how all the components work together, it would be best to check out my blog post.

If you just want to publish messages to RabbitMQ and see the HPA scale the number of worker pods, run the following command:

kubectl run publish -it --rm --image=theryanbaker/rabbitmq-scaling-demo --restart=Never publish 500

The HPA is set to scale at 100 messages per pod, so make sure you publish more than 100 (plus some extra, to give the HPA time to scale).
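For reference, the HPA's core algorithm computes the desired replica count as ceil(currentReplicas × currentMetricValue / targetMetricValue). A quick sketch of that arithmetic for this demo's target of 100 messages per pod (this is the generic formula from the Kubernetes docs, not code from this repo):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Core HPA scaling formula:
    desired = ceil(currentReplicas * (currentMetricValue / targetMetricValue)).
    (The real HPA also applies a tolerance band and min/max replica bounds.)"""
    return math.ceil(current_replicas * (current_metric / target_metric))

# With 1 worker pod, 500 messages on the queue, and a target of 100
# messages per pod, the HPA asks for 5 replicas.
print(desired_replicas(1, 500, 100))  # -> 5
```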

Cleanup

Once you are done testing, you can run the destroy.sh script to clean up all the provisioned resources. Make sure to delete the namespace from Kubernetes once you are completely done, to delete the PVC as well.

About

Get this running on docker desktop / WSL2
