Fix router large scale scenario (#556)
Currently we check for greater than or equal to 24 nodes, excluding the workload node, to execute the large scale scenario. This means that if you have a 24-worker cluster, you actually end up running
the small scale scenario, since one node is excluded when counting total workers.

However, we still use the workload node to place the backend nginx pods, which makes our logic for
determining the small/large scale scenario and for labeling the workload node counter-intuitive:
the nginx backend pods fight for CPU time on the same node from which the mb client is firing requests.

This patch does the following:
1. Counts the workload node as well when determining the number of worker nodes (infra nodes are still excluded)
2. Excludes the workload node from backend pod placement using node anti-affinity

This means that while we still have the same number of backend pods (2000), the workload node is
excluded from nginx pod placement, which should make results more deterministic, as the workload
node exclusively hosts the mb client.
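
For context, the scenario choice is a plain threshold comparison. A minimal sketch of that check, assuming the NUM_NODES and LARGE_SCALE_THRESHOLD names from env.sh below (the actual comparison lives elsewhere in the workload scripts):

if [[ ${NUM_NODES} -ge ${LARGE_SCALE_THRESHOLD} ]]; then
  echo "Running the large scale scenario"   # >= 24 ready workers, workload node now counted
else
  echo "Running the small scale scenario"
fi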

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
smalleni authored Apr 13, 2023
1 parent 5dfc160 commit f0c980d
Showing 3 changed files with 10 additions and 6 deletions.
2 changes: 1 addition & 1 deletion workloads/router-perf-v2/env.sh
@@ -7,7 +7,7 @@ export ES_SERVER=${ES_SERVER:-https://search-perfscale-dev-chmf5l4sh66lvxbnadi4b
export ES_INDEX=${ES_INDEX:-router-test-results}

# Environment setup
-NUM_NODES=$(oc get node -l node-role.kubernetes.io/worker,node-role.kubernetes.io/workload!=,node-role.kubernetes.io/infra!= --no-headers | grep -cw Ready)
+NUM_NODES=$(oc get node -l node-role.kubernetes.io/worker,node-role.kubernetes.io/infra!= --no-headers | grep -cw Ready)
LARGE_SCALE_THRESHOLD=${LARGE_SCALE_THRESHOLD:-24}
METADATA_COLLECTION=${METADATA_COLLECTION:-true}
KUBE_BURNER_RELEASE_URL=${KUBE_BURNER_RELEASE_URL:-https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.16.2/kube-burner-0.16.2-Linux-x86_64.tar.gz}
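
As a quick sanity check (not part of the patch), the old and new selectors can be run side by side; the only difference is that the workload-labeled worker now counts toward NUM_NODES:

# old count: workload node excluded
oc get node -l node-role.kubernetes.io/worker,node-role.kubernetes.io/workload!=,node-role.kubernetes.io/infra!= --no-headers | grep -cw Ready
# new count: workload node included, infra nodes still excluded
oc get node -l node-role.kubernetes.io/worker,node-role.kubernetes.io/infra!= --no-headers | grep -cw Ready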
4 changes: 0 additions & 4 deletions workloads/router-perf-v2/http-perf.yml
@@ -19,7 +19,6 @@ jobs:
    replicas: {{ .NUMBER_OF_ROUTES }}
    inputVars:
      deploymentReplicas: {{ .DEPLOYMENT_REPLICAS }}
-     nodeSelector: "{node-role.kubernetes.io/worker: }"

  - objectTemplate: templates/http-service.yml
    replicas: {{ .NUMBER_OF_ROUTES }}
@@ -49,7 +48,6 @@ jobs:
    replicas: {{ .NUMBER_OF_ROUTES }}
    inputVars:
      deploymentReplicas: {{ .DEPLOYMENT_REPLICAS }}
-     nodeSelector: "{node-role.kubernetes.io/worker: }"

  - objectTemplate: templates/http-service.yml
    replicas: {{ .NUMBER_OF_ROUTES }}
@@ -79,7 +77,6 @@ jobs:
    replicas: {{ .NUMBER_OF_ROUTES }}
    inputVars:
      deploymentReplicas: {{ .DEPLOYMENT_REPLICAS }}
-     nodeSelector: "{node-role.kubernetes.io/worker: }"

  - objectTemplate: templates/https-service.yml
    replicas: {{ .NUMBER_OF_ROUTES }}
@@ -109,7 +106,6 @@ jobs:
    replicas: {{ .NUMBER_OF_ROUTES }}
    inputVars:
      deploymentReplicas: {{ .DEPLOYMENT_REPLICAS }}
-     nodeSelector: "{node-role.kubernetes.io/worker: }"

  - objectTemplate: templates/https-service.yml
    replicas: {{ .NUMBER_OF_ROUTES }}
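
The dropped nodeSelector inputVar rendered into the nginx deployment template as a selector matching any node with the worker role, which is why the backends could previously land on the workload node. Roughly, the set of eligible nodes changes as follows (an illustrative label-selector comparison; the new constraint is actually expressed as required node affinity in the template below, and control-plane taints are ignored here):

# eligible for nginx backends before this patch: every worker, workload node included
oc get node -l node-role.kubernetes.io/worker --no-headers
# eligible after this patch: nodes without the workload or infra role labels
oc get node -l node-role.kubernetes.io/workload!=,node-role.kubernetes.io/infra!= --no-headers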
10 changes: 9 additions & 1 deletion workloads/router-perf-v2/templates/nginx-deploy.yml
@@ -15,7 +15,15 @@ spec:
      labels:
        app: nginx-{{.Replica}}
    spec:
-     nodeSelector: {{.nodeSelector}}
+     affinity:
+       nodeAffinity:
+         requiredDuringSchedulingIgnoredDuringExecution:
+           nodeSelectorTerms:
+           - matchExpressions:
+             - key: node-role.kubernetes.io/workload
+               operator: DoesNotExist
+             - key: node-role.kubernetes.io/infra
+               operator: DoesNotExist
      containers:
      - name: nginx
        image: quay.io/cloud-bulldozer/nginx:latest
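
After a run, it is easy to spot-check that no backend pod was scheduled on the workload node. A hypothetical verification, assuming the workload node carries the usual empty-valued role label:

WORKLOAD_NODE=$(oc get node -l node-role.kubernetes.io/workload= -o jsonpath='{.items[0].metadata.name}')
oc get pods --all-namespaces -o wide --field-selector spec.nodeName=${WORKLOAD_NODE} | grep nginx
# expect no matches: the workload node should exclusively host the mb client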
