
Add nodelocal DNS option #1050

Closed

superseb opened this issue Dec 4, 2018 · 6 comments
superseb (Contributor) commented Dec 4, 2018

In addition to kube-dns and CoreDNS themselves, support setting the nodelocal DNS cache option, released in kubernetes/kubernetes#70555 for v1.13.

vikas027 commented Jan 1, 2019

Hey @superseb ,

Is there a workaround by tweaking a file or configMap for the time being?

My use case: we also run dnsmasq on the host to resolve internal domains by pointing them at specific nameservers. For that to work, I need the IP 172.17.0.1 (the default) in /etc/resolv.conf of the Docker container.
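(For context, a host-side dnsmasq setup of the kind described might look like the following sketch; the internal domain and nameserver IP are illustrative, not from this issue.)

```
# /etc/dnsmasq.conf — illustrative sketch only
# Forward an internal domain to a specific nameserver:
server=/corp.example.com/10.10.0.2
# Listen on the docker0 bridge address so containers
# with 172.17.0.1 in /etc/resolv.conf can reach dnsmasq:
listen-address=172.17.0.1
```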

deniseschannon commented

We should add this as an option in the k8s 1.15 through k8s 1.18 templates when we release 2.4, but not turn it on by default.

deniseschannon commented

Available with v1.1.0-rc10

superseb (Contributor, Author) commented Mar 5, 2020

Can be tested using:

dns:
  provider: coredns
  nodelocal:
    ip_address: "169.254.20.10"

Things to test:

  • --cluster-dns on the kubelet container should be set to this address
  • internal and external DNS resolution should work
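A simple way to exercise the resolution checks is a throwaway test pod; this is a sketch, and the pod name and image are illustrative, not part of RKE:

```yaml
# dns-test.yaml — illustrative test pod, not shipped with RKE
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
    - name: dns-test
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command: ["sleep", "3600"]
```

After `kubectl apply -f dns-test.yaml`, `kubectl exec dns-test -- nslookup kubernetes.default` exercises internal resolution and `kubectl exec dns-test -- nslookup rancher.com` external resolution; both should succeed with the nodelocal cache in place.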

jiaqiluo (Member) commented Mar 5, 2020

The enhancement is validated with RKE version v1.1.0-rc10:

>./rke -v
rke version v1.1.0-rc10

Use RKE to provision a cluster with the following cluster.yml file:

nodes:
  - address: 
    internal_address: 
    user: ubuntu
    role: [etcd, controlplane, worker]
    ssh_key_path:  
  - address: 
    internal_address: 
    user: ubuntu
    role: [etcd, controlplane, worker]
    ssh_key_path:  
  - address:  
    internal_address: 
    user: ubuntu
    role: [etcd, controlplane, worker]
    ssh_key_path:  

dns:
  provider: coredns
  nodelocal:
    ip_address: "169.254.20.10"

Do the following checks on the cluster:

  • confirm that a daemonset named node-local-dns is deployed in the cluster
> k get daemonsets.apps -n kube-system
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
canal            3         3         3       3            3           <none>          20m
node-local-dns   3         3         3       3            3           <none>          20m
  • confirm that --cluster-dns on the kubelet container is set to the designated address (requires SSH into the nodes)
ubuntu@ip-172-31-25-27:~$ docker inspect kubelet | grep -e  'cluster-dns'
            "--cluster-dns=169.254.20.10",
                "--cluster-dns=169.254.20.10",
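For background on what the daemonset runs: each node-local-dns pod is a CoreDNS instance bound to the configured link-local address, caching cluster names and forwarding misses upstream. A simplified, illustrative Corefile sketch follows; it is not the exact configuration the addon deploys, and the cluster DNS placeholder is an assumption:

```
# Illustrative sketch only — not the exact Corefile RKE deploys.
cluster.local:53 {
    bind 169.254.20.10
    cache 30
    # cache misses for cluster names go to the cluster DNS service:
    forward . <cluster-dns-service-ip>
}
.:53 {
    bind 169.254.20.10
    cache 30
    # everything else is forwarded to the node's upstream resolvers:
    forward . /etc/resolv.conf
}
```

This is why pointing the kubelet's --cluster-dns at 169.254.20.10 keeps both internal and external resolution working.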

jiaqiluo (Member) commented

This is also validated in rancher:master-head 8ede17d10

k8s versions:

  • v1.17.3-rancher1-2
  • v1.16.7-rancher1-2
  • v1.15.10-rancher1-2
