BUG: rabbitmq-ha chart fails to deploy on K8S 1.9.4 due to ConfigMaps now being mounted RO #4166
I've worked around this issue by using a busybox initContainer on the StatefulSet with a command to copy the files from the ConfigMap to an emptyDir volume. I'm not sure if this is the right way to go, but I'd be happy to submit a PR.
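A minimal sketch of that workaround, assuming hypothetical names (`copy-rabbitmq-config`, the `rabbitmq-ha` ConfigMap, and the volume names are illustrative, not the chart's actual values):

```yaml
# Illustrative StatefulSet pod spec fragment: the initContainer copies the
# read-only ConfigMap contents into a writable emptyDir, which is then
# mounted at /etc/rabbitmq in the main container.
spec:
  template:
    spec:
      initContainers:
        - name: copy-rabbitmq-config     # hypothetical name
          image: busybox
          command: ["sh", "-c", "cp /configmap/* /etc/rabbitmq/"]
          volumeMounts:
            - name: configmap
              mountPath: /configmap      # RO ConfigMap mount
            - name: config
              mountPath: /etc/rabbitmq   # writable emptyDir
      containers:
        - name: rabbitmq
          volumeMounts:
            - name: config
              mountPath: /etc/rabbitmq
      volumes:
        - name: configmap
          configMap:
            name: rabbitmq-ha            # hypothetical ConfigMap name
        - name: config
          emptyDir: {}
```

This keeps the ConfigMap mount read-only (as 1.9.4 requires) while giving RabbitMQ a writable copy of its config files.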
I think this is the same as #4261
For those interested, the PR is in progress by @svmaris, thanks to him! Here is the reference: #4169. It worked for me after copying its changes. As mentioned by @etiennetremel, do not forget to do:
Otherwise you will get this error:

If you already have done a

This solution is not appropriate if you are in production and have data in these volumes (data you do not want to lose). I think another solution would be to mount the PV in a pod, check the cookie on it, and perform the upgrade as mentioned previously...
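A hedged sketch of that inspection step (the pod name, PVC name, and cookie path below are assumptions for illustration, not values taken from the chart):

```yaml
# Illustrative one-off pod that mounts an existing PVC so you can read the
# Erlang cookie before upgrading. "data-rabbitmq-ha-0" is a hypothetical
# PVC name; substitute the PVC created for your release.
apiVersion: v1
kind: Pod
metadata:
  name: pv-inspect
spec:
  containers:
    - name: inspect
      image: busybox
      # Print the cookie, then keep the pod alive for further inspection.
      command: ["sh", "-c", "cat /data/.erlang.cookie; sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-rabbitmq-ha-0
```

Once the cookie value is confirmed, delete the pod and proceed with the upgrade as described above.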
BUG REPORT
Version of Helm and Kubernetes:
Helm v2.8.1 / K8S v1.9.4-gke.1
Which chart:
rabbitmq-ha
What happened:
First pod fails to start.
What you expected to happen:
/etc/rabbitmq/rabbitmq.conf is expected to mount with file permissions 0644, according to the yaml.
How to reproduce it (as minimally and precisely as possible):
helm install to any default K8S 1.9.4 cluster.
Anything else we need to know:
As of 1.9.4, ConfigMaps and Secrets are mounted RO. See the following for details:
kubernetes/kubernetes#58720