Changing storageClassName in CR does not change the PVC resource #39
This bug was found automatically by Acto. In one of the mutations, Acto changed the CR's persistence/storageClassName field, and the change was silently ignored by the operator.
@tylergu I have a naive question -- why does it have to reconcile the name? In other words, why is not reconciling the name a bug (beyond looking good)?
@tianyin , the operator should not only reconcile the name, but also change the type of persistent volume it uses. Kubernetes provides a concept called StorageClass, which describes the class of storage (e.g., SSD vs. standard disk) that backs a volume. Here, the rabbitmq-operator should actually recreate the Persistent Volume Claim and migrate the data over, since we as the user changed the storageClassName.
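For concreteness, a StorageClass is just another Kubernetes object. A minimal sketch of an SSD class on GKE is below; the name, provisioner, and parameters are illustrative assumptions, not copied from the linked ssd-gke.yaml file:

```yaml
# Sketch of an SSD-backed StorageClass on GKE.
# The name "ssd", the gce-pd provisioner, and the pd-ssd type
# are illustrative assumptions for this example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```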
@tylergu did you report it?
I added a possible fix and opened an issue: rabbitmq/cluster-operator#992
nice!
I think the developers confirmed this bug, and they think it's an interesting (hard) feature to implement. There are other operators that actually support changing the storage class on the fly: https://www.percona.com/blog/2021/04/20/change-storage-class-on-kubernetes-on-the-fly/
This is a priceless point!
Describe the bug
I was trying to change the persistence/storageClassName field in my rabbitmq-cluster's CR, but changing persistence/storageClassName has no effect on the PVC used by the statefulSet.

The persistence/storageClassName field was initially not specified, so the operator used the default storage class "standard". Then I created a new storage class following the instructions here: https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/production-ready/ssd-gke.yaml, and changed persistence/storageClassName from null to ssd. This change failed silently.
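For concreteness, the edit amounted to setting persistence.storageClassName in the RabbitmqCluster CR, roughly as sketched below; the cluster name and storage size are illustrative, not my exact manifest:

```yaml
# Sketch of the edited RabbitmqCluster CR; metadata.name and the
# storage size are illustrative, only storageClassName is the point.
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-rabbitmq        # hypothetical cluster name
spec:
  persistence:
    storageClassName: ssd  # was unset (null), i.e. the "standard" default
    storage: 10Gi
```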
To reproduce
Steps to reproduce this behavior:
1. kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml
2. kubectl apply -f https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/production-ready/ssd-gke.yaml
3. Change persistence/storageClassName to ssd in the RabbitmqCluster CR and apply

Version and environment information
Additional context
This bug occurs because the operator only reconciles the PVC's storage capacity, but does not reconcile the storageClassName here: https://github.com/rabbitmq/cluster-operator/blob/d657ffb516f948aaffd252794e3ed5e75e352d3d/controllers/reconcile_persistence.go#L15.
A possible fix is to create a PVC of the desired storage class and migrate the data over, or to report an error message like the one for PVC scale-down here: https://github.com/rabbitmq/cluster-operator/blob/d657ffb516f948aaffd252794e3ed5e75e352d3d/internal/scaling/scaling.go#L51
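Note that Kubernetes treats spec.storageClassName of an existing PVC as immutable, which is why an in-place update cannot work; a fix would need to create a new claim and migrate the data into it. A hypothetical sketch of such a replacement claim is below; the claim name and size are illustrative, not the operator's actual naming scheme:

```yaml
# Hypothetical replacement PVC bound to the new "ssd" StorageClass.
# An existing PVC's spec.storageClassName cannot be edited, so a fix
# would have to create a claim like this and migrate the data into it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistence-my-rabbitmq-server-0-ssd  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
```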