
Persist dns configuration on upgrade #1954

Closed
poidag-zz opened this issue Dec 3, 2019 · 12 comments · Fixed by kubernetes/kubernetes#85837
Labels
kind/design Categorizes issue or PR as related to design.
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@poidag-zz

poidag-zz commented Dec 3, 2019

EDIT: this issue tracks the potential to persist configuration options from the old CoreDNS deployment after an upgrade.

This PR made it possible to persist the replica count:
kubernetes/kubernetes#85837


What keywords did you search in kubeadm issues before filing this one? kubeadm dns coredns replicas

Is this a BUG REPORT or FEATURE REQUEST?

FEATURE REQUEST

If this is a FEATURE REQUEST, please:

  • Allow specifying a configuration to set CoreDNS replicas when initializing a cluster

Versions

kubeadm version (use kubeadm version):

Environment:

  • Kubernetes version (use kubectl version): 1.16.3
  • Cloud provider or hardware configuration: n/a
  • OS (e.g. from /etc/os-release): n/a
  • Kernel (e.g. uname -a): n/a

What happened?

After creating a cluster with the default replica count, scaling the CoreDNS replicas up and then performing an upgrade caused the replicas to be reset to the default.

What you expected to happen?

dns replicas set back to kubeadm default

How to reproduce it (as minimally and precisely as possible)?

After a default kubeadm init, scale up CoreDNS with the following command:

kubectl -n kube-system scale --replicas=15 deployment.apps/coredns

Then perform an upgrade:

kubeadm upgrade apply

This resets the replicas back to the default. Along with persisting the count during upgrades, it would also be nice to have this configurable during init.
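
For reference, a rough manual workaround (not part of the original report) is to capture the live replica count before the upgrade and restore it afterwards; the upgrade target version below is only an example:

# record the current CoreDNS replica count
REPLICAS=$(kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}')
# run the upgrade (target version is illustrative)
kubeadm upgrade apply v1.17.0
# restore the previous count
kubectl -n kube-system scale --replicas="${REPLICAS}" deployment.apps/coredns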

@chrisohaver

What you expected to happen?
dns replicas set back to kubeadm default

IIUC, you expect the opposite: dns replicas not set back to kubeadm default.

Regarding persistence through upgrades: I think that if the coredns deployment yaml used by kubeadm did not contain a replica count, the number of replicas would not be reset when upgrading (assuming kubeadm does the equivalent of kubectl apply).
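
As a side note, whether an apply-style update leaves the count alone also depends on what is recorded in the kubectl.kubernetes.io/last-applied-configuration annotation; on a live cluster that annotation can be inspected with the command below (it prints nothing if the Deployment has never been managed with client-side kubectl apply):

kubectl -n kube-system get deployment coredns -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'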

@neolit123
Member

@pickledrick as I've mentioned on the PR, v1beta2 is locked for new features.
This is the tracking issue for v1beta3: #1796

@neolit123 neolit123 added kind/feature Categorizes issue or PR as related to a new feature. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Dec 3, 2019
@neolit123 neolit123 added this to the v1.18 milestone Dec 3, 2019
@neolit123
Member

Also, @ereslibre is working on a document related to how to handle CoreDNS Deployment issues.

@neolit123
Member

Instead of modifying the API, I proposed to @pickledrick that we allow upgrades to keep the existing replica count instead of redeploying with 2.

cc @rajansandeep @chrisohaver WDYT?

@chrisohaver

... allow upgrades to keep the existing replica count instead of redeploying with 2

@neolit123, sounds fine to me. One very simple way of doing this could be to remove the replica count line from the coredns Deployment yaml.

@rosti

rosti commented Dec 5, 2019

Commenting here instead of the PR. I am a bit opposed to having a replica count field in the kubeadm config. This is specific to the DNS deployment and not to kubeadm itself.
Also, users who are used to using kubectl to scale the deployment will have to update the ClusterConfiguration too.

What we should do instead is to base our updated deployment on the one currently in use in the cluster (in effect, patch it).
That way we won't touch the replicas, annotations, labels or anything else that users have modified in their DNS deployments.

@ereslibre
Contributor

I am a bit opposed to having a replica count field in the kubeadm config. This is specific to the DNS deployment and not to kubeadm itself.

I also think we should not include the replica count in the kubeadm config.

What we should do instead is to base our updated deployment on the one currently in use in the cluster (in effect, patch it).
That way we won't touch the replicas, annotations, labels or anything else that users have modified in their DNS deployments.

Yes, and I'll follow up with the document about that. There are slightly different approaches, but all of them should patch and not override existing deployment settings. I haven't had the time yet, but we should be able to do something along these lines.
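
As a rough kubectl-level illustration of the patch-not-replace idea (kubeadm would do the equivalent through its own client; the image tag below is only an example): update just the field kubeadm owns on the live Deployment and leave user-modified fields alone.

# bump only the container image on the existing CoreDNS Deployment; replicas, labels
# and annotations that users changed stay untouched (image tag is illustrative)
kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.6.5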

@poidag-zz poidag-zz changed the title Allow configurable dns replicas from init Persist dns configuration on upgrade Dec 6, 2019
@poidag-zz
Author

Hi Everyone,

Thanks for the comments. I have taken another go at this given the feedback from the related PR.

The idea is to check whether there is an existing DNS deployment and, if there is, leave it untouched (unless switching between the kube-dns and CoreDNS types), without making changes to the API.
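
A kubectl-level sketch of that existence check (kubeadm would do the equivalent through its client; shown only for illustration):

# prints nothing if no CoreDNS Deployment exists yet, i.e. a fresh deploy is needed;
# otherwise the existing Deployment can be left as is
kubectl -n kube-system get deployment coredns --ignore-not-found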

@neolit123 neolit123 added lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Dec 8, 2019
@neolit123
Member

Keeping this issue open to track potentially persisting more fields than just the replica count:
#1943 (comment)

/assign @rajansandeep @ereslibre

@neolit123 neolit123 reopened this Dec 12, 2019
@neolit123 neolit123 added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Dec 12, 2019
@neolit123 neolit123 added the kind/design Categorizes issue or PR as related to design. label Jan 22, 2020
@neolit123 neolit123 modified the milestones: v1.18, v1.19 Mar 8, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 6, 2020
@neolit123
Member

Let's log separate tickets for extra options that should persist.
/close

@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

Let's log separate tickets for extra options that should persist.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
