Add e2e testing #5
#12 adds initial e2e structure in directories
FWIW, the
FTR, #62 fixes the breakage.
The next problem is that the
After that, and deploying the nginx controller, the problem is that the controller cannot talk to the API server:
That's apparently because
I hacked
Yet it still does not work:
@bprashanth can you point me to a working setup of this kind somewhere in the k8s ecosystem? https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/docker.md starts with a big bold "Stop, use minikube instead". |
@porridge this should work if you set the master address like |
To summarize our chat, |
It would be great to not have to run with host networking in the e2e since most people don't in reality, but I think it's fine for a first cut with a TODO. so to confirm, hostNetwork is just a workaround for setting KUBERNETES_MASTER, which defaults to localhost instead? |
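A minimal sketch of the two options being discussed (assuming a client built with client-go; this is not the controller's actual code): either pass the master address explicitly, or rely on the localhost default, which only works when the pod shares the host's network namespace via hostNetwork: true.

```go
// A minimal sketch, not the ingress controller's actual code: build a client
// either from an explicit master address (KUBERNETES_MASTER) or fall back to
// in-cluster configuration. With hostNetwork: true the localhost default can
// happen to reach a locally running API server, which is the workaround
// discussed above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed convention for this sketch: honour KUBERNETES_MASTER and
	// KUBECONFIG if they are set in the environment.
	master := os.Getenv("KUBERNETES_MASTER")
	kubeconfig := os.Getenv("KUBECONFIG")

	// With both arguments empty, client-go falls back to in-cluster config,
	// i.e. the service-account token and the kubernetes.default service.
	cfg, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "building client config:", err)
		os.Exit(1)
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "creating clientset:", err)
		os.Exit(1)
	}

	fmt.Printf("talking to API server at %s (clientset ready: %v)\n", cfg.Host, client != nil)
}
```

Either way, the chosen address has to be reachable from inside the pod, which is why the localhost default only works in combination with host networking.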
@porridge do these docs help: https://github.com/kubernetes/ingress/blob/master/docs/dev/setup.md? (Try local-up-cluster, or minikube as described, if they've released a version with the nginx addon.)
@bprashanth I think it was a bit more complicated:
After your suggestion, I tried with the cluster brought up by
Turns out this (as well as [3]) was caused by my laptop's overzealous firewall. I haven't tracked down the cause of (1), though. Perhaps I should try minikube, as I fear trying to teach ufw what to let through will be hard, given that I'm somewhat confused about this networking myself.
@porridge -- the security-token issue seems to be related to setting rshared on the mount point for kubelet. I was able to get the hyperkube containerized kubelet running after doing:
$ mount --bind /var/lib/kubelet /var/lib/kubelet
$ mount --make-rshared /var/lib/kubelet
and passing
I made some baby steps towards using minikube. One question is what I should do with
@porridge -- if you want an example using hyperkube, you can take a look at this: https://github.com/kubernetes/dns/tree/master/pkg/e2e It starts an API server + controller manager in a container. There are some things that need to be resolved with the containerized mounter, but it works... The kickoff script is here: https://github.com/kubernetes/dns/build/e2e-test.sh
@bowei ingress/hack already contains code to start a hyperkube-based cluster, so I'm not sure we need more examples. However, I'm not sure we should be going that way, given the problems I mentioned:
Of course I don't know yet how much better minikube is going to be, but the fact that it's more hermetic is promising.
@aledbf if there is common interest, this may be worth splitting off into a common framework. Then we can mutually benefit from the work. Let me know which direction you guys decide to go...
Yes please :) |
@bprashanth please comment about this ^^ |
Yeah, of course. What are we thinking - a bootstrap hyperkube for Travis, as an incubator project?
I'm happy to propose the project and drive it |
Thanks! Suggest an email to kubernetes-dev per https://github.com/kubernetes/community/blob/master/incubator.md#existing-code-in-kubernetes; the test-infra team might have some thoughts (or recommend putting it in https://github.com/kubernetes/test-infra or https://github.com/kubernetes/repo-infra).
I'd appreciate it if you could keep me in the loop.
@bowei was there any movement on this? |
Sorry about the delay -- will send something post-code freeze... |
@Beeps sorry it took me this long to test this on the e2e cluster from the main kubernetes repo like you suggested in December, but it does not work either - by the looks of it, for the same reason as in the ingress repo's own local-cluster case:
Status update: I (temporarily) ditched the attempts to get this running on my laptop and moved my dev environment onto a workstation, to rule out interference from the local firewall. There I was able to successfully play with the nginx controller on a local cluster brought up with
Starting the same controller on the cluster launched with
I think I'm going to start working on some first e2e test cases, using that former cluster for the time being, while waiting for @bowei to start the effort towards a common bootstrap (now that 1.6 is released, hint hint).
One interesting problem - worth thinking about when designing this common e2e test infrastructure: when trying to
Apparently, this is not because of a version difference, but because flags are global, and the two ginkgo packages are treated as unrelated ones, so their (identical)
For lack of better ideas, I worked around this for now by removing
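For concreteness, a minimal sketch of the collision described above (the flag name mirrors one of ginkgo's real flags, but the two functions stand in for the two vendored copies): Go's flag package panics at registration time when the same name is defined twice on the default FlagSet.

```go
// Minimal reproduction of the failure mode described above: Go's flag
// package panics with "flag redefined" when two packages register the same
// flag name on the default FlagSet. The two functions below stand in for
// two independently vendored copies of ginkgo.
package main

import "flag"

func registerFlagsCopyA() {
	flag.String("ginkgo.focus", "", "focus regexp (registered by copy A)")
}

func registerFlagsCopyB() {
	// Same name again on the shared, global FlagSet:
	// panics with `flag redefined: ginkgo.focus`.
	flag.String("ginkgo.focus", "", "focus regexp (registered by copy B)")
}

func main() {
	registerFlagsCopyA()
	registerFlagsCopyB() // panic happens here, before flag.Parse is even reached
	flag.Parse()
}
```

Linking only a single copy of the library into the binary avoids the duplicate registration, which appears to be the spirit of the workaround mentioned above.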
srsly the
This is a proof of concept for kubernetes/ingress-nginx#5 (comment) for only one of the flags.
@onsi I took a look and:
Please take a look at onsi/ginkgo@master...porridge:flag-tolerant which I think is a reasonable compromise between backwards compatibility and making it possible for vendored ginkgo to work at all.
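One possible shape for such a compromise (an assumption about the approach; not necessarily what the branch above implements) is to skip registration when a flag with the same name already exists on the default FlagSet, instead of letting the flag package panic:

```go
// Hypothetical sketch of "tolerant" flag registration; an assumed approach,
// not necessarily what the flag-tolerant branch does.
package main

import (
	"flag"
	"fmt"
)

// stringFlagIfAbsent registers a string flag only if no flag with that name
// exists yet on the default FlagSet.
func stringFlagIfAbsent(name, value, usage string) *string {
	if flag.Lookup(name) != nil {
		// Already registered by another vendored copy; in this simplified
		// sketch the late caller just gets the default value back.
		v := value
		return &v
	}
	return flag.String(name, value, usage)
}

func main() {
	a := stringFlagIfAbsent("ginkgo.focus", "", "focus regexp")
	b := stringFlagIfAbsent("ginkgo.focus", "", "focus regexp") // no panic this time
	flag.Parse()
	fmt.Println(*a, *b)
}
```

A real implementation would also have to decide whether the second copy should see the first copy's parsed value, which is where the backwards-compatibility trade-off comes in.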
Hi, is the "help wanted" label still valid here? If yes, I'd like to join the ride. Is the work in #1331 also related to this issue?
Yes |
This issue was moved to kubernetes/ingress-gce#16 |
kubernetes-retired/contrib#1441 (comment)
@porridge fyi