updating kube-dns to 1.14.5 because of CVE-2017-14491 #3512
Comments
EDIT: see the issue description for the complete commands.
Note: @justinsb missed the sidecar, which is included in the above command (#3512 (comment)).
Automatic merge from submit-queue.

Update kube-dns to 1.14.5 for CVE-2017-14491

As described in https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html. Not sure if it'd be possible to cut a new 1.7 release with this, or something else to give people a quick fix. The current workaround would be to manually update the addons in S3. For those who may reference this: simply upgrading to 1.7.7 will not fix this in kops.

### Edit ~ @chrislovecnm

Please see #3512 for more information on how to address these concerns with current kops releases. We are still in the process of testing this release of kube-dns, which is a very critical component of Kubernetes.
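(For anyone going the manual-addon route: kops keeps the addon manifests in the state store under the cluster prefix. A rough sketch of locating the kube-dns addon there; the bucket name and cluster name below are placeholders, and the exact path is an assumption rather than something confirmed in this thread:)

```
# List kube-dns addon manifests in the kops state store (hypothetical bucket/cluster names)
aws s3 ls s3://my-kops-state-store/my-cluster.example.com/addons/ --recursive | grep kube-dns
```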
The PR is merged, and if testing goes well it will be included in our next 1.8.0 alpha release.
Awesome. Thank you guys!
Tested #3511 on a fresh AWS cluster after jumping through some hoops. Testing spinning up a 1.7.7 cluster now for kicks.
The tests above were done on current stable (1.7.2); results are the same as below, minus the obvious version numbers. Also verified, looks good on 1.7.7 (gotta update my kubectl):
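(The pasted verification output is not preserved in this dump. A quick way to check which images the running kube-dns pods are using; the `k8s-app=kube-dns` label is the standard one but is an assumption about this cluster:)

```
# List the container images the running kube-dns pods are using
kubectl -n kube-system get pods -l k8s-app=kube-dns \
  -o jsonpath='{range .items[*]}{.metadata.name}: {.spec.containers[*].image}{"\n"}{end}'
```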
Also verified on v1.8.0. Same results as above.
My cluster, which is v1.5.7, does not have the same kube-dns that is listed above; I don't have the same images in my kube-dns pod. Getting the deployment and grepping for the images shows:
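(The output itself was not captured in this dump; the command would have been along these lines, a sketch rather than the poster's exact invocation:)

```
# Dump the kube-dns deployment and pull out the container images
kubectl -n kube-system get deployment kube-dns -o yaml | grep image:
```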
What is the upgrade path for a cluster that is < 1.6.x?
I have asked on the dev list for a compatibility matrix and have not gotten an answer. I will need to look in kubernetes/kubernetes to see. With the other pod container names, are you able to upgrade to 1.14.5?
@snoby according to the k8s 1.5.8 release (https://github.com/kubernetes/kubernetes/pull/53149/files) you need to update just the dnsmasq image;
after that you will see the updated dnsmasq version.
I have checked the other containers that are in the kube-dns deployment.
Trying to collate this information into a file: #3534
From Aaron on the dev list, for k8s 1.5 and below:

$ kubectl set image deployment/kube-dns dnsmasq=gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.14.5 --namespace=kube-system
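(To confirm the rollout actually completed after running the command above; this is a suggested follow-up, not part of the original message:)

```
# Wait for the kube-dns deployment to finish rolling out the new image
kubectl -n kube-system rollout status deployment/kube-dns
```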
I followed this for version 1.5.5. I am seeing that the kube-dns pod keeps restarting. The kube events show the liveness check is failing.
And once the pod stabilizes, if I check the dnsmasq version it still shows the old one.
@varsharaja is it looking for the new config map?
@varsharaja I edited your comment so I could read it better. Can we get the previous logs from the container?
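(For anyone following along, the previous container's logs can be pulled with the `--previous` flag; the pod name below is a placeholder, and `dnsmasq` as the container name is an assumption based on the standard kube-dns deployment:)

```
# Fetch logs from the previously crashed dnsmasq container in a kube-dns pod
kubectl -n kube-system logs <kube-dns-pod-name> -c dnsmasq --previous
```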
@varsharaja I just set up a 1.5.7 cluster, ran your command, and things look fine. What version of kops are you using? It may be useful to get the rest of your kube-dns deployment if you don't mind.
I made a mistake on one of the clusters and conflated the 1.5 and the 1.6/1.7 instructions. If you apply the change, make double sure that you are following the instructions that match your cluster version.
@mikesplain: I haven't used kops for the cluster upgrade. I tried changing only the dnsmasq container image with the kubectl command. This is the default DNS deployment in 1.5.5. It says the deployment image was updated, and I see the image getting pulled in the events; the pod is created but then gets killed and falls back to the old image.
The spin-up is within seconds. Let me try to get the logs from the syslog. Let me check on the configmap part as well.
@varsharaja container logs via kubectl, please.
Here it is. Looks like it is looking for some config map, but I don't have any configmaps in kube-system.
You need to create the config-map by hand. This was an issue upgrading to the newest version of kube-dns.
@varsharaja see #2827 (comment) for instructions
Specifically:

$ kubectl -n kube-system get configmap kube-dns

If not, then create an empty one:

$ kubectl create configmap -n kube-system kube-dns
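(After creating the configmap, the crashing pods should recover on their next restart; if you want to force it, deleting the kube-dns pods so the deployment recreates them is one option. This is a suggested follow-up, assuming the standard `k8s-app=kube-dns` label:)

```
# Recreate the kube-dns pods so they pick up the newly created configmap
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```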
Automatic merge from submit-queue.

Fix CVE for kube-dns pre k8s 1.6

Additional fix for #3512. Testing now.
@chrislovecnm: Thanks for the pointer, that fixed the configmap issue. However, the pod failed the liveness health check the first time. I had to edit the skydns-rc.yaml file to set the image name. Things worked fine after this.
@varsharaja did the kubectl command not work for you?
@chrislovecnm: When I ran the kubectl command, I saw the already-running kube-dns pod get killed and a new one start. This new pod is killed within 1 minute due to the liveness check failure, and if skydns-rc.yaml still points to the old dnsmasq image, the pod that replaces it comes back with the old dnsmasq image. Once I make the change in the rc file, the new image gets loaded when I kill the kube-dns pod.
There is probably a typo in the command to update the image for 1.6: the command sets a different image than the one the deployment was previously using. I'm not aware of the differences between the nanny and non-nanny images, but the nanny version is the one the existing deployment references.
@meese can you PR the update to the docs or give me CLI examples so I can update it, please?
@chrislovecnm The correct command is:
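(The command itself was not preserved in this dump. Based on the nanny/non-nanny discussion above, it was presumably along these lines for 1.6+ clusters; treat this as a reconstruction rather than the poster's verbatim command:)

```
# Update the dnsmasq container in kube-dns to the patched nanny image (1.6+ clusters)
kubectl set image deployment/kube-dns --namespace=kube-system \
  dnsmasq=gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
```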
Sorry, my employer doesn't have the CLA accepted yet. I'll poke them again, so I can create a PR next time.
Closing as we have notes and a new release.
Thanks @chrislovecnm. I saw this issue too. One of my colleagues did
Edit: see #3563 (comment)
Details
Due to https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html, we have an upgraded version of kube-dns that needs to be PR'ed and tested.
If you do not want to wait for a kops release, update your Kubernetes cluster accordingly.
Manual Update
Update the kube-dns deployment images via:
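(The exact command block was not preserved in this dump. For 1.6+ clusters it was presumably along these lines, covering all three containers including the sidecar mentioned above; the image names are the standard gcr.io kube-dns 1.14.5 images, but treat this as a reconstruction rather than the verbatim issue text:)

```
# Update all three kube-dns containers (kubedns, dnsmasq nanny, sidecar) to 1.14.5
kubectl set image deployment/kube-dns --namespace=kube-system \
  kubedns=gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 \
  dnsmasq=gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 \
  sidecar=gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
```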
Validate that the pods have deployed
kubectl -n kube-system get po