add commonLabels value #3438
Conversation
Looks reasonable to me, thanks for the PR!
You will need to force-push a correction though. The subject of the commit should have a topic, and that is missing at the moment. I recommend using
deploy: add commonLabels value
or similar. The commit message itself can also have a few more details, basically copy/paste from the PR description.
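A hedged example of what the amended commit could look like (the exact wording is up to the author; --force-with-lease is just a slightly safer variant of a plain force-push):

git commit --amend \
  -m 'deploy: add commonLabels value' \
  -m 'Add a commonLabels value to the charts so that extra labels can be applied to all chart resources.'
git push --force-with-lease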
Please add some test labels here https://github.com/ceph/ceph-csi/blob/devel/scripts/install-helm.sh#L172-L182 so that this will get tested in CI.
Force-pushed f88a3f1 to cdeae99
@Madhu-1 Are the tests suitable for you?
scripts/install-helm.sh (Outdated)
@@ -179,7 +179,7 @@ install_cephcsi_helm_charts() {
     kubectl_retry delete cm ceph-config --namespace ${NAMESPACE}

     # shellcheck disable=SC2086
-    "${HELM}" install --namespace ${NAMESPACE} --set provisioner.fullnameOverride=csi-rbdplugin-provisioner --set nodeplugin.fullnameOverride=csi-rbdplugin --set configMapName=ceph-csi-config --set provisioner.replicaCount=1 ${SET_SC_TEMPLATE_VALUES} ${RBD_SECRET_TEMPLATE_VALUES} ${RBD_CHART_NAME} "${SCRIPT_DIR}"/../charts/ceph-csi-rbd --set topology.enabled=true --set topology.domainLabels="{${NODE_LABEL_REGION},${NODE_LABEL_ZONE}}" --set provisioner.maxSnapshotsOnImage=3 --set provisioner.minSnapshotsOnImage=2
+    "${HELM}" install --namespace ${NAMESPACE} --set provisioner.fullnameOverride=csi-rbdplugin-provisioner --set nodeplugin.fullnameOverride=csi-rbdplugin --set configMapName=ceph-csi-config --set provisioner.replicaCount=1 --set-json='commonLabels={"test": "test", "un": "deux"}' ${SET_SC_TEMPLATE_VALUES} ${RBD_SECRET_TEMPLATE_VALUES} ${RBD_CHART_NAME} "${SCRIPT_DIR}"/../charts/ceph-csi-rbd --set topology.enabled=true --set topology.domainLabels="{${NODE_LABEL_REGION},${NODE_LABEL_ZONE}}" --set provisioner.maxSnapshotsOnImage=3 --set provisioner.minSnapshotsOnImage=2
Looks good. Can you please use some meaningful names instead (choose whatever you like, something specific to cephcsi)? For example:
--set-json='commonLabels={"component": "ceph", "sub-component": "rbd"}'
Force-pushed 6322356 to 097f875
changes LGTM, please squash the commits.
scripts/install-helm.sh (Outdated)
@@ -179,7 +179,7 @@ install_cephcsi_helm_charts() {
     kubectl_retry delete cm ceph-config --namespace ${NAMESPACE}

     # shellcheck disable=SC2086
-    "${HELM}" install --namespace ${NAMESPACE} --set provisioner.fullnameOverride=csi-rbdplugin-provisioner --set nodeplugin.fullnameOverride=csi-rbdplugin --set configMapName=ceph-csi-config --set provisioner.replicaCount=1 ${SET_SC_TEMPLATE_VALUES} ${RBD_SECRET_TEMPLATE_VALUES} ${RBD_CHART_NAME} "${SCRIPT_DIR}"/../charts/ceph-csi-rbd --set topology.enabled=true --set topology.domainLabels="{${NODE_LABEL_REGION},${NODE_LABEL_ZONE}}" --set provisioner.maxSnapshotsOnImage=3 --set provisioner.minSnapshotsOnImage=2
+    "${HELM}" install --namespace ${NAMESPACE} --set provisioner.fullnameOverride=csi-rbdplugin-provisioner --set nodeplugin.fullnameOverride=csi-rbdplugin --set configMapName=ceph-csi-config --set provisioner.replicaCount=1 --set-json='commonLabels={"app.kubernetes.io/name": "ceph-csi-rdb", "app.kubernetes.io/managed-by": "helm"}' ${SET_SC_TEMPLATE_VALUES} ${RBD_SECRET_TEMPLATE_VALUES} ${RBD_CHART_NAME} "${SCRIPT_DIR}"/../charts/ceph-csi-rbd --set topology.enabled=true --set topology.domainLabels="{${NODE_LABEL_REGION},${NODE_LABEL_ZONE}}" --set provisioner.maxSnapshotsOnImage=3 --set provisioner.minSnapshotsOnImage=2
There is a typo in the label value below: change "app.kubernetes.io/name": "ceph-csi-rdb" to "app.kubernetes.io/name": "ceph-csi-rbd".
Done
Normally you can do this directly during the merge @Madhu-1
@bastienbosser Thanks!
Force-pushed 4f0da28 to 8a9d6a9
Done
After other PRs are done with their CI runs, this can be rebased by Mergify and de…
@Mergifyio rebase
✅ Branch has been successfully rebased
Force-pushed 8a9d6a9 to ca00435
Yes, because of the missing flag the deployment was not successful and the csi pods are not running as expected. Can you please use some other existing flag to set it, or do we need to update the helm version in build.env?
Force-pushed ca00435 to 7fb0387
Pull request has been modified.
I changed the helm version in the build.env file to make the tests work.
How can this value be set if someone is using an older helm version? Do you have a sample command?
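To my knowledge --set-json only appeared in Helm 3.10, so with an older client the same labels could presumably be passed through a values file or plain --set instead; a hedged sketch (release name, file name and label values are illustrative):

# via an extra values file
cat > common-labels.yaml <<'EOF'
commonLabels:
  app.kubernetes.io/name: ceph-csi-rbd
  app.kubernetes.io/managed-by: helm
EOF
helm install ceph-csi-rbd ./charts/ceph-csi-rbd -f common-labels.yaml

# or via --set, escaping the dots inside the label keys
helm install ceph-csi-rbd ./charts/ceph-csi-rbd \
  --set 'commonLabels.app\.kubernetes\.io/name=ceph-csi-rbd,commonLabels.app\.kubernetes\.io/managed-by=helm'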
/test ci/centos/mini-e2e/k8s-1.25
Seems to run well now!
@Mergifyio rebase
Signed-off-by: BOSSER, Bastien <bastien.bosser@atos.net>
✅ Branch has been successfully rebased
Force-pushed 7fb0387 to e690c63
/test ci/centos/k8s-e2e-external-storage/1.23
/test ci/centos/k8s-e2e-external-storage/1.24
/test ci/centos/k8s-e2e-external-storage/1.25
/test ci/centos/mini-e2e-helm/k8s-1.23
/test ci/centos/mini-e2e-helm/k8s-1.24
/test ci/centos/mini-e2e-helm/k8s-1.25
/test ci/centos/mini-e2e/k8s-1.23
/test ci/centos/mini-e2e/k8s-1.24
/test ci/centos/mini-e2e/k8s-1.25
/test ci/centos/upgrade-tests-cephfs
/test ci/centos/upgrade-tests-rbd
Signed-off-by: BOSSER, Bastien bastien.bosser@atos.net
Describe what this PR does
Add a commonLabels variable in the values.yaml file that allows adding extra labels to all Helm chart resources.
As you know, labels can be used to organize and to select subsets of objects. In my use case it allows me to group applications by category and to check their status with a single command.
For example, for all security applications I add the label "my-cluster-name/application_group: security". Then I only have to run "kubectl get all -A -l my-cluster-name/application_group=security" to see the status of all applications that provide security.
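As a hedged illustration (release name, namespace and label key are examples, not chart defaults), installing the chart with commonLabels and then querying by that label could look like this:

helm install ceph-csi-rbd ./charts/ceph-csi-rbd --namespace ceph-csi-rbd --create-namespace \
  --set-json='commonLabels={"my-cluster-name/application_group": "storage"}'
# every resource rendered by the chart now carries the label, so a single selector covers them all
kubectl get all -A -l my-cluster-name/application_group=storage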
Related issues
Fixes: #3437