fix ingress (also for multinode clusters) #13439
/ok-to-test

kvm2 driver with docker runtime
Times for minikube start: 45.5s 44.2s 44.4s 44.2s 43.7s
Times for minikube ingress: 85.1s 28.6s 28.6s 25.5s 29.0s

docker driver with docker runtime
Times for minikube start: 26.6s 26.6s 26.2s 25.8s 26.2s
Times for minikube ingress: 22.9s 23.0s 22.9s 22.4s 23.9s

docker driver with containerd runtime
Times for minikube start: 30.4s 41.7s 45.2s 45.5s 45.5s
Times for minikube ingress: 23.5s 54.4s 19.9s 20.0s 29.0s
This looks excellent! Thanks for your hard work.
Thank you for the mention @prezha. Looks really good 👍
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: alexbaeza, prezha, sharifelgamal. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test

kvm2 driver with docker runtime
Times for minikube start: 52.9s 46.8s 53.5s 52.3s 52.9s
Times for minikube (PR 13439) ingress: 29.6s 28.6s 29.6s 30.0s 28.5s

docker driver with docker runtime
Times for minikube start: 42.3s 26.8s 26.5s 26.0s 26.5s
Times for minikube ingress: 22.4s 21.9s 21.4s 22.4s 22.4s

docker driver with containerd runtime
Times for minikube start: 44.1s 45.7s 42.3s 42.0s 41.4s
Times for minikube (PR 13439) ingress: 23.4s 23.9s 23.5s 24.4s 33.4s
These are the flake rates of all failed tests.
Too many tests failed - see test logs for more details. To see the flake rates of all tests by environment, click here.
fixes #12903
fixes #13088
more specifically, with this PR we should:

- avoid using --publish-status-address=localhost and publish the node's actual IP instead, while keeping is-default-class set, the additional ConfigMaps for tcp/udp-services, gcp-auth secrets handling, etc.
- but also: make the ingress reachable via minikube ip

example (using: https://kind.sigs.k8s.io/docs/user/ingress/):
❯ minikube start --nodes=3
😄 minikube v1.25.1 on Opensuse-Tumbleweed
✨ Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
💨 For improved Docker performance, enable the overlay Linux kernel module using 'modprobe overlay'
❗ docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=5333MB) ...
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ kubelet.cni-conf-dir=/etc/cni/net.mk
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
👍 Starting worker node minikube-m02 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=5333MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
👍 Starting worker node minikube-m03 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=5333MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ env NO_PROXY=192.168.49.2
▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
❯ minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.1.1
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
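One way to see what this PR changes is to inspect the controller's flags after enabling the addon: with the fix, the controller should no longer pin --publish-status-address=localhost, so the ingress status falls back to the node's real IP. A sketch against a running cluster (deployment and namespace names assume the addon's defaults, ingress-nginx-controller in ingress-nginx):

```shell
# Sketch: print the controller container's args and check for the flag.
# Assumes the addon's default deployment/namespace names.
kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
# With this PR applied, --publish-status-address=localhost should be absent,
# so the Ingress ADDRESS column is populated with the node's IP instead.
```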
❯ kubectl get pods -n ingress-nginx
❯ kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
❯ kubectl get ingress
... (wait a bit) ...
❯ kubectl get ingress
❯ kubectl get nodes -o wide
❯ kubectl get pods -o wide
❯ minikube ip
192.168.49.2
foo
bar
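The foo and bar lines above are the HTTP responses; the curl commands themselves were lost in the copy. Assuming the /foo and /bar paths from the kind ingress example applied earlier, the requests would look roughly like:

```shell
# Sketch: exercise the two Ingress paths via the node IP that
# "minikube ip" reported above (192.168.49.2 in this session).
curl "$(minikube ip)/foo"   # expected to answer: foo
curl "$(minikube ip)/bar"   # expected to answer: bar
```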
for reference, the content of https://kind.sigs.k8s.io/examples/ingress/usage.yaml:
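The manifest body itself did not survive the copy. Based on the resource names printed by kubectl apply above (foo-app, foo-service, bar-app, bar-service, example-ingress), it is a simple path-based fanout of roughly this shape; the image, port, and the mirrored bar resources are illustrative assumptions, not necessarily the exact upstream content:

```yaml
# Illustrative sketch only; see the kind URL above for the authoritative manifest.
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: hashicorp/http-echo   # assumed echo-server image
    args: ["-text=foo"]
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  - port: 5678   # http-echo's default port (assumed)
# ... bar-app and bar-service mirror the above with "bar" ...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /foo
        backend:
          service:
            name: foo-service
            port:
              number: 5678
      - pathType: Prefix
        path: /bar
        backend:
          service:
            name: bar-service
            port:
              number: 5678
```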