These are the steps required for setting up and running the demos.
Add the required Helm repositories:

```shell
helm repo add kedacore https://kedacore.github.io/charts
helm repo add podinfo https://stefanprodan.github.io/podinfo
```

Update the Helm repos:

```shell
helm repo update
```
Create a namespace and install the KEDA Helm chart into it:

```shell
kubectl create namespace keda-demo
helm install keda kedacore/keda --namespace keda-demo --version 2.3.2
```
Install the podinfo Helm chart in the same namespace as KEDA, using the `--set` flag to enable the `ServiceMonitor`. If the prometheus-operator is configured correctly, the `ServiceMonitor` will be detected and Prometheus will be configured to scrape metrics from the podinfo workload(s):

```shell
helm install podinfo --namespace keda-demo podinfo/podinfo --version 5.2.1 --set serviceMonitor.enabled=true
```
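For reference, the `ServiceMonitor` the chart creates has roughly this shape; the exact name, labels, and port come from the chart, so treat this as an illustrative sketch only:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: podinfo                         # actual name is set by the chart
  namespace: keda-demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: podinfo   # illustrative label selector
  endpoints:
    - port: http                        # service port exposing /metrics
      path: /metrics
      interval: 15s
```

The prometheus-operator watches for objects of this kind and generates the matching Prometheus scrape configuration.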
After installation, go to the `/prometheus/targets` endpoint of your Prometheus instance and confirm that the podinfo workload is a target and is being scraped successfully.
Deploy a `Deployment` object into the `keda-demo` namespace:

```shell
kubectl apply -n keda-demo -f examples/deployments/example-workload.yaml
```
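The contents of `examples/deployments/example-workload.yaml` are not reproduced here; a minimal demo `Deployment` generally looks like this sketch (the name, labels, and image below are illustrative assumptions, not the repository's actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      containers:
        - name: app
          image: stefanprodan/podinfo   # illustrative image
```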
For the demos below you also need to:

- Have the prometheus-operator installed and configured to look for `ServiceMonitor`s in the `keda-demo` namespace. You can use the community Helm chart to install it.
- Optionally, have Ambassador's Telepresence v2 installed.
- Update the `serverAddress` in the `examples/keda/prom-scaledobject.yaml` file.
  - If you want to access it internally, you can use the format `http://SVC_NAME.NAMESPACE.svc.cluster.local:9090`. If your Prometheus instance is served on a subpath, append the subpath after the port number (e.g. `http://SVC_NAME.NAMESPACE.svc.cluster.local:9090/prometheus`).
- Deploy the `ScaledObject` with the prometheus trigger:

  ```shell
  kubectl apply -f examples/keda/prom-scaledobject.yaml
  ```
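The repository's `examples/keda/prom-scaledobject.yaml` is the authoritative file; a KEDA v2 `ScaledObject` with a prometheus trigger generally has this shape (the target name, replica counts, query, and threshold here are illustrative assumptions):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prom-scaledobject        # illustrative name
  namespace: keda-demo
spec:
  scaleTargetRef:
    name: example-workload       # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://SVC_NAME.NAMESPACE.svc.cluster.local:9090
        metricName: http_requests_total            # illustrative metric
        query: sum(rate(http_requests_total[2m]))  # illustrative query
        threshold: "5"
```

KEDA creates and manages an HPA from this definition, which is why a plain `kubectl get hpa` shows one in the steps below.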
- Check that the `ScaledObject` is ready with `kubectl get scaledobject -n keda-demo`.
- If it is not ready, check the logs of the KEDA pod (`kubectl logs POD_NAME -n keda-demo`). The most likely cause is an incorrect server address.
- Check and examine the HPA object created by KEDA with `kubectl get hpa -n keda-demo` and `kubectl describe hpa -n keda-demo`.
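To interpret what `kubectl describe hpa` shows you: the HPA that KEDA creates scales with the standard Kubernetes rule `desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)`. A quick sketch of that arithmetic (this is the generic HPA formula, not KEDA-specific code):

```shell
# Standard Kubernetes HPA rule (not KEDA-specific):
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
desired_replicas() {
  awk -v current="$1" -v metric="$2" -v target="$3" \
    'BEGIN { d = current * metric / target; if (d > int(d)) d = int(d) + 1; print int(d) }'
}

desired_replicas 2 30 10   # 2 replicas averaging 30 req/s vs a 10 req/s target -> 6
desired_replicas 3 5 10    # 3 replicas averaging 5 req/s vs a 10 req/s target  -> 2
```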
- Open another terminal and watch the pods in the `keda-demo` namespace with `kubectl get pods -n keda-demo -w`.
- Open another terminal and watch the deployments in the `keda-demo` namespace with `kubectl get deployments -n keda-demo -w`.
- Now we want to trigger the autoscaling. If you have Telepresence, repeatedly run curl commands against the podinfo service with `while :; do curl podinfo.keda-demo:9898; sleep 1; done`.
  - If you don't have Telepresence, you can exec into another pod that has curl installed and run `curl podinfo.keda-demo.svc.cluster.local:9898/metrics` from within that pod.
- As you run the curl commands, you'll eventually notice the number of replicas of the consumer workload increase in the other terminals.
- Stop running the curl commands.
- Wait until the `Deployment` scales back down to the minimum number of replicas.
- Clean up the resources:

  ```shell
  kubectl delete -f examples/keda/prom-scaledobject.yaml
  ```
- Deploy the redis `Deployment` and `Service`:

  ```shell
  kubectl apply -f examples/deployments/redis.yaml
  kubectl apply -f examples/deployments/redis-svc.yaml
  ```
- Confirm the redis pod is running with `kubectl get pods -n keda-demo`.
- Confirm the redis `Service` has successfully found the redis pod by checking that an endpoint exists with `kubectl get endpoints -n keda-demo`.
- Deploy the `ScaledObject` with the redis trigger:

  ```shell
  kubectl apply -f examples/keda/redis-scaledobject.yaml
  ```
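Again, `examples/keda/redis-scaledobject.yaml` is the file that matters; a redis list trigger in KEDA generally looks like this fragment (the address, list name, and length threshold are illustrative assumptions):

```yaml
triggers:
  - type: redis
    metadata:
      address: redis.keda-demo.svc.cluster.local:6379
      listName: mylist        # the key used in the steps below
      listLength: "5"         # illustrative threshold
```

KEDA polls the length of this list (the same value `LLEN` reports) and scales the target when it crosses the threshold.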
- Check that the `ScaledObject` is ready with `kubectl get scaledobject -n keda-demo`.
- If it is not ready, check the logs of the KEDA pod (`kubectl logs POD_NAME -n keda-demo`). The most likely cause is an incorrect address.
- Check and examine the HPA object created by KEDA with `kubectl get hpa -n keda-demo` and `kubectl describe hpa -n keda-demo`.
- Open another terminal and watch the pods in the `keda-demo` namespace with `kubectl get pods -n keda-demo -w`.
- Open another terminal and watch the deployments in the `keda-demo` namespace with `kubectl get deployments -n keda-demo -w`.
- Now we want to trigger the autoscaling. Exec into the redis pod with `kubectl exec redis-(RANDOM_STRING) -it -n keda-demo -- redis-cli`.
- Check the length of the list stored in the `mylist` key with `LLEN mylist`.
- Add to the list with `LPUSH mylist "string"` until the length of the list is above the threshold.
- One additional replica should have been created.
- Remove items from the list with `LPOP mylist` until the length of the list is below the threshold.
- Wait until the `Deployment` scales back down to the minimum number of replicas.
- Clean up the resources:

  ```shell
  kubectl delete -f examples/deployments/redis.yaml
  kubectl delete -f examples/deployments/redis-svc.yaml
  kubectl delete -f examples/keda/redis-scaledobject.yaml
  ```
- Deploy the redis `Deployment` and `Service`:

  ```shell
  kubectl apply -f examples/deployments/redis.yaml
  kubectl apply -f examples/deployments/redis-svc.yaml
  ```
- Confirm the redis pod is running with `kubectl get pods -n keda-demo`.
- Confirm the redis `Service` has successfully found the redis pod by checking that an endpoint exists with `kubectl get endpoints -n keda-demo`.
- Deploy the `ScaledJob` with the redis trigger:

  ```shell
  kubectl apply -f examples/keda/redis-scaledjob.yaml
  ```
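A `ScaledJob` differs from a `ScaledObject` in that KEDA spawns Kubernetes `Job`s instead of scaling a `Deployment`'s replicas. The repository's `examples/keda/redis-scaledjob.yaml` is authoritative; the general shape is as follows (the name, image, and all values here are illustrative assumptions):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: redis-scaledjob         # illustrative name
  namespace: keda-demo
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: consumer
            image: busybox      # illustrative consumer image
        restartPolicy: Never
  pollingInterval: 10
  maxReplicaCount: 5
  triggers:
    - type: redis
      metadata:
        address: redis.keda-demo.svc.cluster.local:6379
        listName: myotherlist
        listLength: "1"
```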
- Check that the `ScaledJob` is ready with `kubectl get scaledjob -n keda-demo`.
- If it is not ready, check the logs of the KEDA pod (`kubectl logs POD_NAME -n keda-demo`). The most likely cause is an incorrect address.
- Open another terminal and watch the pods in the `keda-demo` namespace with `kubectl get pods -n keda-demo -w`.
- Open another terminal and watch the jobs in the `keda-demo` namespace with `kubectl get jobs -n keda-demo -w`.
- Now we want to trigger the autoscaling. Exec into the redis pod with `kubectl exec redis-(RANDOM_STRING) -it -n keda-demo -- redis-cli`.
- Check the length of the list stored in the `myotherlist` key with `LLEN myotherlist`.
- Add to the list with `LPUSH myotherlist "myotherlist"` until the length of the list is above the threshold.
- An additional job should have been created.
- Wait and observe as more jobs are continuously created.
- Remove the item from the list with `LPOP myotherlist`.
- Wait and observe that no more jobs are created.
- Clean up the resources:

  ```shell
  kubectl delete -f examples/deployments/redis.yaml
  kubectl delete -f examples/deployments/redis-svc.yaml
  kubectl delete -f examples/keda/redis-scaledjob.yaml
  ```