Error: failed to download "https://10.43.0.1:443/static/charts/traefik-1.81.0.tgz" #1817
Comments
Bump, since I am also seeing it. The helm-install pod logs include: Not installing Tiller due to 'client-only' flag having been set
`k3s kubectl -n kube-system get pods`

After deleting the helm traefik install pod with `k3s kubectl -n kube-system delete pod helm-install-traefik-k4bld`, the traefik pods get installed (`k3s kubectl -n kube-system get pods`).
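For reference, that workaround as a copy-pasteable sequence; a minimal sketch assuming the default kube-system namespace and that the stuck pod is named helm-install-traefik-k4bld (the suffix will differ per cluster):

```bash
# Check the state of the helm-install job pod for traefik
k3s kubectl -n kube-system get pods

# Delete the stuck helm-install pod; the job controller recreates it and retries the install
k3s kubectl -n kube-system delete pod helm-install-traefik-k4bld

# Verify that the traefik pods come up after the retry
k3s kubectl -n kube-system get pods
```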
I see a lot of … I also see that you are getting SSL errors in other places when running curl commands, which is expected: the certificates used internally by Kubernetes are self-signed and will not be trusted by curl unless you've taken steps to trust the k3s root CA. Additionally, the apiserver requires authentication, which your curl command does not supply. This command will still fail because your …
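A hedged illustration of the kind of properly authenticated check described above (the exact command from the thread is not quoted here); it assumes a default k3s install where the server CA and admin client certificate live under /var/lib/rancher/k3s/server/tls, and reuses the chart URL from the issue title:

```bash
# Run on the server node as root. The TLS paths below are the default k3s locations (an assumption).
curl --cacert /var/lib/rancher/k3s/server/tls/server-ca.crt \
     --cert /var/lib/rancher/k3s/server/tls/client-admin.crt \
     --key /var/lib/rancher/k3s/server/tls/client-admin.key \
     -o /tmp/traefik-1.81.0.tgz \
     https://10.43.0.1:443/static/charts/traefik-1.81.0.tgz
```

With the CA and client certificate supplied, a remaining failure points at connectivity rather than certificate trust.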
@brandond I have two nodes, both servers. I realized later on that this was a silly misdiagnosis of a cert validation issue on my part. Thanks for providing the right curl parameters to test it properly. Any more insights to debug this connectivity issue?
A 'no route to host' error from the apiserver endpoint is pretty odd. What distro and architecture are you on? Does …
Also, are those 2 nodes running in the same network environment? Are they behind NAT? I was recently hit by a bug (feature?) in flannel related to private/public addresses, which would explain your issue: #1824
Is there a local firewall (ufw) or cloud provider firewall (security groups, etc.) that might be blocking some traffic between nodes?
@brandond No ufw. Also, this is on-prem for us, hence no cloud provider firewall either. For now we've added some retries to see if that helps fix the issue for us.
I have the exact same error with a very simple configuration with one node only. k3s was installed on CentOS Linux (7.8) with … It seems to work for the most part, as it is possible to deploy pods, services, persistent volumes, and so on. Anyway, traefik is not working:
The logs in the "helm-install-traefik-xthfq" pod are the very same as the ones posted by samirsss above.
Ran into the same issue, and after some investigation this solved it for me: stopping the firewalld service confirmed the cause. I then had to allow the k3s subnet (10.42.x.x) access to the host to get things working while the firewalld service is running.
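A minimal sketch of that kind of firewalld exception, assuming the default k3s pod CIDR 10.42.0.0/16; the service CIDR 10.43.0.0/16 is an added assumption, since the commenter only mentions 10.42.x.x:

```bash
# Allow traffic from the k3s pod network (and, as an assumption, the service network)
# instead of disabling firewalld entirely.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16   # assumption: default service CIDR
sudo firewall-cmd --reload
```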
I had the same issue on my freshly provisioned CentOS 8 box. The output of … Even though my CentOS installation was completely new, there was a single thing that was different from a vanilla installation of CentOS 8: for reasons unknown to me, our IT people insist on installing the … package. Attached: iptables-save_without-service.txt
If
Shoutout to #566 (comment) for pointing me in this direction. I agree with the author of that comment that k3s should do this automatically. EDIT: Actually, I am not too sure anymore that this was the solution...
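Since the exact fix is not quoted above, here is a hedged way to check for the kind of iptables rules that a pre-installed iptables/firewall package on CentOS typically leaves behind; blanket REJECT rules with icmp-host-prohibited are a common cause of the 'no route to host' errors mentioned earlier in the thread (an illustration, not the confirmed root cause here):

```bash
# Dump the current ruleset and look for blanket REJECT/DROP rules in INPUT or FORWARD
sudo iptables-save | grep -E -- '-j (REJECT|DROP)'

# A rule like "-A INPUT -j REJECT --reject-with icmp-host-prohibited" will block
# pod-to-apiserver traffic such as the chart download; removing the package that
# installs it (or the rule itself) and restarting k3s is the usual remedy.
```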
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
For posterity, I hit this error too, but for different reasons I believe. I jacked up my cluster pretty badly and ended up deleting all my server nodes and re-adding them; this was also an upgrade, which I think may have contributed to my problems. After re-adding the last node I was getting this error. I was able to work around it by deleting the traefik helm deployment with the command I found here: #717 (comment). I then restarted the k3s systemd unit and it recreated the traefik.yaml file (which was missing??) from the /manifests directory and successfully re-installed traefik for me.
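The exact command from #717 is not quoted here; a plausible equivalent under a default k3s install (an assumption, not the confirmed command) is to delete the HelmChart resource and let k3s re-apply its manifests:

```bash
# Remove the traefik HelmChart resource so it can be recreated cleanly
# (assumption: default resource name "traefik" in the kube-system namespace)
kubectl -n kube-system delete helmchart traefik

# Restart k3s so it re-applies the manifests in /var/lib/rancher/k3s/server/manifests,
# recreating traefik.yaml and re-running the helm-install job
sudo systemctl restart k3s
```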
Version:
k3s version v1.18.2+k3s1 (698e444)
K3s arguments:
ExecStart=/usr/local/bin/k3s server --kube-controller-manager-arg pod-eviction-timeout=1m --disable local-storage,metrics-server --disable-cloud-controller --data-dir /var/lib/rancher/k3s --disable traefik --kube-apiserver-arg feature-gates="ServiceTopology=true,EndpointSlice=true" --datastore-endpoint postgres://postgres:postgres@10.177.205.14:5432/kubernetes
Describe the bug
During k3s service startup, we have a custom traefik manifest under the /var/lib/rancher/k3s/server/manifests directory so that the helm job will install the custom traefik service. During the helm job run, the retrieval of the traefik helm chart from the local kubernetes clusterIP failed. This appears to be due to an internal kubernetes certificate validation failure.
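When the failure occurs, one hedged way to confirm it is to inspect the helm-install job directly; this sketch assumes the default kube-system namespace and the standard helm-install-traefik job name:

```bash
# List the helm-install job and its pod for traefik
kubectl -n kube-system get jobs,pods | grep helm-install-traefik

# Tail the job pod's logs; the failed chart download from the in-cluster
# apiserver address (https://10.43.0.1:443/static/charts/...) shows up here
kubectl -n kube-system logs -l job-name=helm-install-traefik --tail=50
```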
To Reproduce
It is hard to reproduce but comes up on occasion.
Expected behavior
Retrieval of the traefik helm chart from the local kubernetes cluster should succeed so that the custom traefik manifest can be installed.
Actual behavior
The retrieval of the traefik helm chart failed.
Additional context / logs