DNS resolution of hostNetwork pods (e.g. Restic Backup Addon) #1178
Labels: kind/bug

Comments
@toschneck Can you try setting …? If that solves the problem, we can create a PR to add this to the manifest.

will try it and let you know

@xmudrii it seems to work:
toschneck added a commit that referenced this issue on Dec 1, 2020:

Fix restrict backup addon according to: #1178 (comment) and https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
kubermatic-bot pushed a commit that referenced this issue on Dec 1, 2020:

Fix restrict backup addon according to: #1178 (comment) and https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
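Based on the linked Kubernetes documentation on the Pod's DNS policy, the change in these commits was presumably setting `dnsPolicy: ClusterFirstWithHostNet` on the hostNetwork backup job. A minimal sketch of that kind of change (the job name, image, and other fields are illustrative, not the actual addon manifest):

```yaml
# Sketch only: a hostNetwork pod keeps using the cluster DNS (and its search
# domains) only if dnsPolicy is explicitly set to ClusterFirstWithHostNet;
# otherwise it falls back to the node's /etc/resolv.conf.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-backup                    # illustrative name
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # the presumed fix from the commits above
      restartPolicy: OnFailure
      containers:
        - name: backup
          image: example/backup:latest    # placeholder image
```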
What happened:
While using KubeOne as the seed cluster provisioner (on vSphere), we applied the restic backup addon with the goal of using the in-cluster MinIO service `minio.minio.svc.cluster.local`. Unfortunately this did not work, because the in-cluster DNS name could not be resolved.

backup job yaml

During debugging I found out that the `/etc/resolv.conf` of the job pod did not contain the cluster search domains. After some research, it seems that using `hostNetwork: true` can cause DNS resolution to break like this; the combination with Flannel might also be a factor. Some referenced upstream issues:

backup pod, without searchdomain:

normal POD

As I don't think this is normal behavior, we should investigate the DNS resolution issue.
What is the expected behavior:
How to reproduce the issue:
Deploy a pod with hostNetwork=true and try to resolve an in-cluster service name such as `minio.minio.svc.cluster.local`.
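For example, a minimal pod like the following sketch (name and image are illustrative) shows the behavior: with `hostNetwork: true` and the default DNS policy, the pod falls back to the node's `/etc/resolv.conf` and cluster-internal names do not resolve.

```yaml
# Reproduction sketch: hostNetwork pod with the default dnsPolicy.
# Inside the pod, /etc/resolv.conf lacks the cluster search domains, so the
# lookup of an in-cluster service name fails; the same pod without
# hostNetwork: true resolves it fine.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-dns-test            # illustrative name
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
    - name: test
      image: busybox:1.32           # any image with nslookup works
      command: ["nslookup", "minio.minio.svc.cluster.local"]
```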
Anything else we need to know?
The issue happened in two different environments on vSphere: my lab setup (https://github.com/kubermatic-labs/kubermatic-demo/tree/master/vsphere) and a customer environment.
Information about the environment:
KubeOne version (`kubeone version`):
Operating system: Ubuntu
Provider you're deploying cluster on: vSphere
Operating system you're deploying on: Ubuntu 18.04
Workaround

For the backup location itself, the service IP of the minio svc could be used (see `kubectl get svc -n minio`). Unfortunately, this is only stable as long as the service does not get redeployed.
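As a sketch of that workaround, the backup endpoint would point at the Service's ClusterIP instead of the DNS name (the environment variable name, port, and IP below are placeholders, not the actual addon manifest):

```yaml
# Workaround sketch: use the MinIO Service's ClusterIP instead of the
# cluster DNS name. Look up the IP with: kubectl get svc -n minio
# (CLUSTER-IP column). This breaks if the Service is recreated with a new IP.
containers:
  - name: backup
    env:
      - name: BACKUP_ENDPOINT               # hypothetical variable name
        value: "http://10.96.123.45:9000"   # placeholder ClusterIP and port
```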