
Welcome to the KEDA Workshop

In this workshop we will learn what KEDA is, how it works, what the built-in scalers are, and how to build an External scaler specific to our needs.

Pre-requisites

Part 1: Built-in Scalers

Install KEDA

There are several ways to install KEDA; the simplest is to use the Helm chart.

  1. Add Helm repo

helm repo add kedacore https://kedacore.github.io/charts

  2. Update Helm repo

helm repo update

  3. Install KEDA Helm chart
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda

For other options check KEDA's deployment documentation.
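
To verify the installation, you can check that the KEDA pods reach the Running state in the keda namespace:

kubectl get pods --namespace keda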

Follow the RabbitMQ Sample

Clone https://github.com/kedacore/sample-go-rabbitmq

Follow the instructions of the sample until you reach "Deploying a RabbitMQ consumer" so we can discuss the deploy/deploy-consumer.yaml file.
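
As a preview for that discussion: alongside the consumer Deployment, the file contains a ScaledObject with a rabbitmq trigger. A trimmed sketch of its shape follows; the exact contents vary between KEDA versions, so treat the sample repo's file as authoritative:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    name: rabbitmq-consumer
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello
        mode: QueueLength
        value: "5"
      authenticationRef:
        name: rabbitmq-consumer-trigger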

Part 2: External Scalers

Create a target deployment

This is the deployment that we target for scaling; it could be, for example, the deployment that consumes messages from a queue.

In our case it is just a simple web app sample provided by Microsoft. We are not going to worry about exposing it with a Service; the purpose of the workshop is just to show how this deployment scales up and down based on our External scaler.

To deploy the target app:

kubectl apply -f my-scaler/yaml/target-deployment.yaml

Create The External Scaler

External scalers are containers that provide gRPC endpoints for KEDA to call. So let's create a gRPC .NET Core app from the template:

  1. Run dotnet new grpc -n my-scaler
  2. Add the file externalscaler.proto from https://github.com/kedacore/keda/blob/main/pkg/scalers/externalscaler/externalscaler.proto to the folder my-scaler/Protos
  3. Include the file we just added in the gRPC code generation by adding the following line to the .csproj file:
<Protobuf Include="Protos\externalscaler.proto" GrpcServices="Server" />
  4. Run dotnet build to generate the base gRPC code
  5. Create a file ExternalScalerService.cs under the Services folder; we will build it gradually together. To save time, you can copy the file from this repo if you want to jump to its final state (a rough sketch of the finished service also follows this list).
  6. Add the following line to the Program.cs file:
app.MapGrpcService<ExternalScalerService>();

Note: If you're using an older .NET Core version, you might instead need to add this to the Startup.cs file in the UseEndpoints section:

endpoints.MapGrpcService<ExternalScalerService>();
  7. Add the following line to the Program.cs file:
builder.Services.AddHttpClient();

Note: If you're using an older .NET Core version, you might instead need to add this to the Startup.cs file in the ConfigureServices method:

services.AddHttpClient();
  8. Create a Dockerfile and a .dockerignore file (it's important not to forget the latter; you can copy the content of both from the repo)
  9. Build the image by running:
docker build . -t my-scaler-image

Feel free to choose any image name you like; however, remember to use it consistently from this point on.
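
Before we build it step by step, here is a minimal sketch of what ExternalScalerService.cs ends up implementing: the three gRPC methods KEDA calls (IsActive, GetMetricSpec, GetMetrics). It assumes the generated C# namespace is Externalscaler and that the mock endpoint's URL arrives via an endpointUrl key in the ScaledObject's scalerMetadata; both are assumptions of this sketch, and the finished file in this repo is the reference.

using System.Net.Http;
using System.Threading.Tasks;
using Externalscaler; // namespace generated from the proto's "package externalscaler"
using Grpc.Core;

public class ExternalScalerService : ExternalScaler.ExternalScalerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ExternalScalerService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    // Fetches the integer from the fake endpoint. The "endpointUrl" metadata
    // key is an assumption of this sketch; it would be supplied through the
    // ScaledObject's scalerMetadata.
    private async Task<long> GetFakeMetricAsync(ScaledObjectRef scaledObject)
    {
        var url = scaledObject.ScalerMetadata["endpointUrl"];
        var client = _httpClientFactory.CreateClient();
        var body = await client.GetStringAsync(url);
        return long.Parse(body.Trim().Trim('"'));
    }

    // KEDA calls IsActive to decide whether the target should be scaled
    // above zero at all.
    public override async Task<IsActiveResponse> IsActive(
        ScaledObjectRef request, ServerCallContext context)
    {
        var value = await GetFakeMetricAsync(request);
        return new IsActiveResponse { Result = value > 0 };
    }

    // KEDA calls GetMetricSpec to learn the metric name and the target
    // value per replica.
    public override Task<GetMetricSpecResponse> GetMetricSpec(
        ScaledObjectRef request, ServerCallContext context)
    {
        var response = new GetMetricSpecResponse();
        response.MetricSpecs.Add(new MetricSpec
        {
            MetricName = "fakeMetric",
            TargetSize = 5 // e.g. one replica per 5 units of the metric
        });
        return Task.FromResult(response);
    }

    // KEDA polls GetMetrics for the current value and compares it to the
    // target size above.
    public override async Task<GetMetricsResponse> GetMetrics(
        GetMetricsRequest request, ServerCallContext context)
    {
        var value = await GetFakeMetricAsync(request.ScaledObjectRef);
        var response = new GetMetricsResponse();
        response.MetricValues.Add(new MetricValue
        {
            MetricName = "fakeMetric",
            MetricValue_ = value // underscore: C# codegen avoids a clash with the type name
        });
        return response;
    }
}

The proto also defines a StreamIsActive method used by push-based external scalers; a polling scaler like ours can leave it to the default (unimplemented) behavior.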

Create a Deployment and a Service in Kubernetes for our new scaler

Let's create a Deployment and a Service to run our scaler and route requests to it. From the root of this repo copy the file my-scaler/yaml/my-scaler-deployment.yaml, and then run:

kubectl apply -f my-scaler-deployment.yaml

Note: You can use port forwarding to troubleshoot the gRPC service, and a tool like BloomRPC to check whether it is working properly:

kubectl port-forward service/my-scaler-service 3333:80

Create a mock server

In this step we are going to create a fake HTTP endpoint. Our scaler will query this endpoint, and it will return an integer that we will use as the fake criterion on which to scale our deployment.

In reality this might be the length of a queue for a technology that does not have built-in support in KEDA, the number of logged-in users, etc.

  1. In this repo, open the file mock-server/mockserver-config/static/initializerJson.json and create a new endpoint that returns an integer in string format. Let's call it fake (see the example after this list).

  2. Create the namespace "mockserver":

kubectl create namespace mockserver
  3. Navigate to the folder mock-server and then run the following command to create a ConfigMap from which the mockserver will read its configuration:
helm upgrade --install --namespace mockserver mockserver-config mockserver-config
  4. Create a deployment to run the mockserver itself by running the following:
helm upgrade --install --namespace mockserver --set app.mountConfigMap=true --set app.mountedConfigMapName=mockserver-config --set app.propertiesFileName=mockserver.properties --set app.initializationJsonFileName=initializerJson.json mockserver mockserver
  5. If you want to change the configuration to experiment with scaling up and down, run the following commands to restart the mockserver and force it to pick up the new config values:
helm upgrade --install --namespace mockserver mockserver-config mockserver-config

kubectl rollout restart deploy/mockserver -n mockserver
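
For reference, a MockServer expectation serving such a fake endpoint could look roughly like this inside initializerJson.json; the path /fake and the value 10 are just illustrative, so adapt them to your own file:

[
  {
    "httpRequest": {
      "method": "GET",
      "path": "/fake"
    },
    "httpResponse": {
      "statusCode": 200,
      "body": "10"
    }
  }
]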

Create the ScaledObject

ScaledObject is the Kubernetes resource (specific to KEDA) that tells KEDA to scale our target deployment based on the configuration within. Copy the content of the file my-scaler/yaml/scaled-config.yaml, and then run:

kubectl apply -f scaled-config.yaml 

If everything is set up right and the fake endpoint returns the right value, watch your target deployment scale out to many pods.
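
For orientation, the core of that file is a trigger of type external pointing at the Service we created for our scaler. A rough sketch follows; the deployment name, the scaler address, and the endpointUrl metadata key are assumptions of this walkthrough, and scaled-config.yaml in the repo is the reference:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaled-object
spec:
  scaleTargetRef:
    name: target-deployment  # the deployment we created earlier (name assumed)
  pollingInterval: 10
  triggers:
    - type: external
      metadata:
        scalerAddress: my-scaler-service.default.svc.cluster.local:80
        endpointUrl: http://mockserver.mockserver.svc.cluster.local:1080/fake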
