
Pod and Service cidrs must be passed on all masters (not just the 1st one) #52

Closed
fredleger opened this issue May 1, 2021 · 4 comments · Fixed by #54
Comments

@fredleger

Unless I totally missed something, if you try to change the service CIDR, the cluster DNS address will change too, and it has to be passed to all masters, not just the first one.

This is already achievable through the individual flags, but it would be easier if it were handled the same way as the pod and service CIDRs.
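
For example (the flag values below are made up): k3s derives its default cluster DNS address from the service CIDR (10.43.0.10 inside the default 10.43.0.0/16), so moving the service CIDR means the DNS address has to follow:

# k3s defaults: --service-cidr 10.43.0.0/16 and --cluster-dns 10.43.0.10
# a custom service CIDR needs a matching cluster DNS address (example values):
k3s server \
  --service-cidr 10.100.0.0/16 \
  --cluster-dns 10.100.0.10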

Tell me if that makes sense to you; I can work on the PR.

Thanks for this great module, btw!

@xunleii (Owner) commented May 1, 2021

@fredleger thanks for opening this issue.

I've tested this, and providing --cluster-dns on the first master only seems to work on my side... but --cluster-cidr and --service-cidr must be passed to all masters 😅 (if you create a service through the API of a node other than the first one, it is created with the default service CIDR). This must therefore be fixed ASAP, otherwise it will break multi-master clusters when a load balancer sits in front of the kube API.
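
To make the per-node behaviour concrete, here is a minimal sketch of the flags involved (the CIDR values are arbitrary examples, and an embedded-etcd HA setup is assumed):

# first master bootstraps the cluster with custom CIDRs (example values)
k3s server --cluster-init \
  --cluster-cidr 172.16.0.0/16 --service-cidr 172.17.0.0/16

# each additional master must repeat both CIDR flags; otherwise its
# API server allocates ClusterIPs from the default 10.43.0.0/16 range
k3s server --server https://<first-master>:6443 --token <token> \
  --cluster-cidr 172.16.0.0/16 --service-cidr 172.17.0.0/16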

To test this, I followed the procedure below (the patch is available in this gist):

# work in a throwaway directory and apply the test patch
cd $(mktemp -d)
git clone https://github.com/xunleii/terraform-module-k3s
cd terraform-module-k3s
curl https://gist.githubusercontent.com/xunleii/d269954722a993254f2af0a56a9bd2a2/raw/720abccf806fb82ce9a26c003ae2d628a3025590/git.patch | git apply -

# deploy the Hetzner Cloud example with a dedicated SSH key
cd examples/hcloud-k3s
ssh-keygen -f issue52.id_rsa
ssh-add issue52.id_rsa
terraform init
terraform apply -var "ssh_key=$(cat issue52.id_rsa.pub)"

# run the Kubernetes conformance tests from the first control-plane node
ssh root@$(hcloud server ip k3s-control-plane-1)
(k3s-control-plane-1)> curl -sL https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.50.0/sonobuoy_0.50.0_linux_amd64.tar.gz | tar -xzf -
(k3s-control-plane-1)> ./sonobuoy run --kubeconfig /etc/rancher/k3s/k3s.yaml --wait

The sonobuoy tests take a long time to run, so I will comment on this issue with the result once they are complete.

@xunleii (Owner) commented May 1, 2021

The E2E sonobuoy tests passed successfully (the full dump is available in the gist):

(k3s-control-plane-1)> ./sonobuoy e2e 202105011025_sonobuoy_08a3b931-12a3-4a71-9260-89bc3ee5f1e7.tar.gz 
failed tests: 0

I think the CIDR issue (the flags are currently passed only to the first master) must be fixed, but passing --cluster-dns to all masters is not required.

What do you think about that? Have you seen anything unusual when you provide the flag to the first master only?

@fredleger (Author)

Thanks for testing that so thoroughly.

OK, I think I ran into the issue you are mentioning, where my CoreDNS had a 10.43.x.x address. Now that I pass both CIDRs to all masters, it seems to work.

I have other issues that seem unrelated.

What made me think about it is that the Rancher docs are not very clear about which flags go where.

@fredleger (Author)

I had time to test it a little more deeply today.

You are totally right on both points:

  • --cluster-dns can be passed to the first node only
  • --cluster-cidr and --service-cidr must be passed to all masters (this was in fact my main issue; a quick check is sketched below)
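
A quick way to check the second point (assuming an existing nginx deployment; the service name is just an example): create a service through a secondary master's API and look at the ClusterIP it receives:

# create a test service through a secondary master's API endpoint
kubectl --server https://<second-master>:6443 \
  expose deployment nginx --port 80 --name cidr-test
kubectl --server https://<second-master>:6443 \
  get svc cidr-test -o jsonpath='{.spec.clusterIP}'
# a 10.43.x.x address means that master fell back to the default service CIDR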

So I will try to propose a PR for it soon ;-)

fredleger changed the title --cluster-dns has to be passed to all masters → Pod and Service cidrs must be passed on all masters (not just the 1st one) May 3, 2021
xunleii added a commit that referenced this issue Jun 23, 2021
fix #52

Signed-off-by: Alexandre Nicolaie dit Clairville <alexandre.nicolaie@gmail.com>
xunleii linked a pull request Jun 23, 2021 that will close this issue
xunleii added a commit that referenced this issue Jun 24, 2021