[RFE] Capture ARCH details in multiarch network perf test scenario #151
To capture the arch details in the case of multiarch scenarios (ARM and x86_64).
Test link, for instance:
https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/56028/rehearse-56028-pull-ci-openshift-qe-ocp-qe-perfscale-ci-main-aws-4.17-nightly-multi-data-path-9nodes/1833129570308460544

Comments
Please open against the GoCommons repo. This seems relevant across the toolset. cc @rsevilla87 @vishnuchalla @chentex
Correct, this data should already be available.
@SachinNinganure I'm going to close this out unless you see that we are missing something here.
I have tried looking at the logs of the network-perf test run on the multi-arch nodes, but did not see the additional worker info in the logs: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openshift_release/56028/rehearse-56028-pull-ci-openshift-qe-ocp-qe-perfscale-ci-main-aws-4.17-nightly-multi-data-path-9nodes/1833129570308460544/artifacts/data-path-9nodes/openshift-qe-network-perf/build-log.txt Hence I am not sure whether I am looking in the right place. Please let me know if this should be closed and I am checking incorrectly.
@krishvoor Could you please check on this?
It is collecting the master and worker node info, but not the additional workers' info.
@jtaleric Sachin is attempting k8s-netperf on a multi-arch worker node [both ARM & x86_64] cluster setup.
I've realized that the metadata collection performed by the tool doesn't include this info. Fortunately, k8s-netperf also grabs and indexes the labels from the nodes where the client/server pods run; among these indexed labels you can find kubernetes.io/arch.
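For illustration, here is a minimal client-go sketch of reading the arch of the node a client/server pod landed on via the standard kubernetes.io/arch label. This is not the actual k8s-netperf code; the namespace, pod name, and kubeconfig path below are placeholders.

```go
// Minimal sketch, assuming client-go; namespace and pod name are placeholders.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeArchForPod looks up the node a pod is scheduled on and returns the
// value of the standard kubernetes.io/arch label (e.g. amd64, arm64).
func nodeArchForPod(ctx context.Context, c kubernetes.Interface, ns, pod string) (string, error) {
	p, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	n, err := c.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return n.Labels["kubernetes.io/arch"], nil
}

func main() {
	// Load the default kubeconfig ($HOME/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	arch, err := nodeArchForPod(context.Background(), kubernetes.NewForConfigOrDie(cfg), "netperf", "client-pod")
	if err != nil {
		panic(err)
	}
	fmt.Println("client node arch:", arch)
}
```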
I've made some small modifications to the first table of the dashboard to show this information; find it in the last columns: https://grafana.rdu2.scalelab.redhat.com:3000/d/wINGhybVz/k8s-netperf?orgId=1&var-datasource=abc72863-3b49-47d5-98d1-357a9559afea&var-platform=BareMetal&var-platform=AWS&var-workerNodesType=All&var-uuid=3df0431d-6e8c-4116-bb46-8a6ed01327a8&var-hostNetwork=&var-service=All&var-parallelism=All&var-throughput_profile=All&var-latency_profile=All&var-messageSize=All&var-driver=netperf&from=1726042211403&to=1726128611404&tab=transform&viewPanel=83
Thanks for the insights @rsevilla87. I guess this isn't the case across the other tools (ingress-perf/kube-burner)?
IMHO we cannot rely on the labels; as we have found with CNV, it creates an explosion of labels on nodes. We need to be specific about what we collect. I would recommend someone open a PR to add arch information for the nodes the server and client land on -- not all of the nodes; that is useless, because only the server and the client are important to us. WDYT?
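A rough sketch of that suggestion, with hypothetical names (RunMetadata and populateArch are illustrative, not an existing GoCommons/k8s-netperf API): record the kubernetes.io/arch label of only the two nodes the server and client pods landed on.

```go
// Hypothetical sketch of the suggestion above: capture the arch of only the
// server and client nodes in the per-run metadata. RunMetadata and
// populateArch are illustrative names, not an existing API.
package metadata

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

type RunMetadata struct {
	ServerNodeArch string `json:"serverNodeArch"`
	ClientNodeArch string `json:"clientNodeArch"`
}

// populateArch reads the kubernetes.io/arch label from the two nodes the
// server and client pods landed on, ignoring all other nodes in the cluster.
func populateArch(ctx context.Context, c kubernetes.Interface, md *RunMetadata, serverNode, clientNode string) error {
	targets := []struct {
		node string
		dst  *string
	}{
		{serverNode, &md.ServerNodeArch},
		{clientNode, &md.ClientNodeArch},
	}
	for _, t := range targets {
		n, err := c.CoreV1().Nodes().Get(ctx, t.node, metav1.GetOptions{})
		if err != nil {
			return err
		}
		*t.dst = n.Labels["kubernetes.io/arch"]
	}
	return nil
}
```

Keeping this to two explicit fields avoids the label-explosion problem mentioned above, since nothing else from the node labels is indexed.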