Contour Ingress & Gateway #468
Conversation
Nice work! Two comments and then lgtm
Almost there! Nice work here, @louiseschmidtgen!
Ah, interesting: the api-group seems to be simply projectcontour.io.
Approved due to time pressure, but we need to revisit the tests, as they are a possible source of flakes in the future.
```python
# Get gateway node port
gateway_http_port = None
util.stubbornly(retries=5, delay_s=2).on(session_instance).until(
    lambda p: "my-gateway" in p.stdout.decode()
).exec(["k8s", "kubectl", "get", "service", "-o", "json"])
```
This code here can be dangerous. We repeatedly get the list of services until my-gateway appears in the output. That is right, as the service might take some time to appear (and perhaps 5 retries with a 2-second backoff is also rather short).
However, we then retrieve the services again to parse the service info and extract the nodePort. Note that the nodePort might not be available upon service creation, so there is a race here: if the test retrieves the service before the ports are configured, the test will flake.
Instead, we should do all of the logic inside the util.stubbornly call. Currently it uses a lambda, but we should probably write a proper function instead, then retry until we get the nodePort we were after. If we don't get the nodePort in time, that's OK; the timeout will tell us about the failure.
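A minimal sketch of that shape, assuming `util.stubbornly(...).until(...).exec(...)` returns the final process result (the retry counts here are illustrative):

```python
import json

def gateway_node_port_ready(p) -> bool:
    # Succeed only once the my-gateway service exists AND its nodePort
    # has been assigned, so no second, racy lookup is needed.
    try:
        services = json.loads(p.stdout.decode())["items"]
    except (ValueError, KeyError):
        return False
    return any(
        "my-gateway" in svc["metadata"]["name"]
        and any(port.get("nodePort") for port in svc["spec"].get("ports", []))
        for svc in services
    )

p = (
    util.stubbornly(retries=10, delay_s=3)
    .on(session_instance)
    .until(gateway_node_port_ready)
    .exec(["k8s", "kubectl", "get", "service", "-o", "json"])
)
# p.stdout now already contains the assigned nodePort, so parsing it cannot race.
```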
lambda p: "ingress" in p.stdout.decode() | ||
).exec(["k8s", "kubectl", "get", "service", "-A", "-o", "json"]) |
Same here: we fetch the services until we see some arbitrary output, then fetch them again. This should all be done inside the util.stubbornly call.
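The same refactor applies here; only the condition changes (a sketch, reusing the helper style from above):

```python
def ingress_node_port_ready(p) -> bool:
    # Condition for util.stubbornly: the ingress service must both
    # exist and already have its nodePort assigned before we stop.
    try:
        services = json.loads(p.stdout.decode())["items"]
    except (ValueError, KeyError):
        return False
    return any(
        "ingress" in svc["metadata"]["name"]
        and any(port.get("nodePort") for port in svc["spec"].get("ports", []))
        for svc in services
    )
```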
Commits
* put a little note in docs on disabling network
* first draft of contour ingress and gateway
* put contour chart
* minor fixes
* put some fakes
* more wip
* helm client create ns for contour
* gateway
* testy yamls
* gateway provisioner
* ck-gateway
* more gateway
* gateway provider manifest to chart
* remove that
* add mini tls delegation chart
* some cleanup
* set use-proxy-protocol
* yamling, put shared crds in their own chart
* forgot this yaml
* some more comments
* gateway class renamed ck-gateway, some renaming
* testing adjustments & cleanup
* fix ingress class name
* fix cilium ingress class
* try improve tests
* lint
* lint2
* lint
* fix test
* change contour chart version
* update component contour
* update ingress test
* little typo
* ingressClassName patch
* typo
* cleanup cilium ingress class
* update dir
* cleanup
* make contour charts updatable
* fix gateway yamls
* comments
* lost braket,
* beautify
* comments
* move tlscertdelegation
* lint
* undo move enabled config
* update charts
* match on label my-gateway
* improve label check
* linter
* make ingress check more precise
* update helm pull for contour
* wait for contour common crds
* add missing returns
* change name in api resources await
* change chart helm pull
* missed a little name change
* comment update (Co-authored-by: eaudetcobello <155978570+eaudetcobello@users.noreply.github.com>)
* update resources for group version
* rewrite wait for crds for two groups
* update wait for crds
* the rest of my comment
* maybe now I understand api group conventions
* Revert "maybe now I understand api group conventions" (This reverts commit 854f205.)
* correct api-resource name
* update chart helm pull
* remove resource.group check optional param
* cleanup
* make it two loops
* add images
* comments

Co-authored-by: eaudetcobello <155978570+eaudetcobello@users.noreply.github.com>
Contour Ingress & Gateway
Ingress
When we run `k8s enable ingress`, we get a static deployment of Contour, and with it an IngressClass. When you disable ingress, both get removed.
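A quick test-style check for this, in the spirit of the existing suite (the match string is an assumption; the actual IngressClass name may differ):

```python
# After `k8s enable ingress`, wait for the Contour IngressClass to appear.
util.stubbornly(retries=10, delay_s=3).on(session_instance).until(
    lambda p: "contour" in p.stdout.decode()
).exec(["k8s", "kubectl", "get", "ingressclass", "-o", "json"])
```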
When you run `k8s set default-tls-secret=bananas`, you get a TLSCertificateDelegation resource (from the mini ck-ingress-tls chart) which you can use to point to your bananas secret.
Now for `k8s set enable-proxy-protocol=true`: this sets a config arg on the static Contour deployment. Contour gets restarted and, voilà, there is an extra arg.
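To confirm the restart picked up the flag, the container args can be inspected (the deployment name, namespace, and flag name below are assumptions based on upstream Contour's `--use-proxy-protocol` option):

```python
# Verify the extra arg landed on the restarted Contour deployment.
p = session_instance.exec(
    [
        "k8s", "kubectl", "get", "deployment", "contour",
        "-n", "projectcontour",
        "-o", "jsonpath={.spec.template.spec.containers[0].args}",
    ],
    capture_output=True,
)
assert "--use-proxy-protocol" in p.stdout.decode()
```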
Gateway
Now when you run `k8s enable gateway`, you get the dynamic gateway provisioner. This Helm chart lives in ck-gateway. I also added a GatewayClass to the chart, with which people can create their Gateway resources.
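With that GatewayClass in place, creating a gateway looks like the usual Gateway API flow. A sketch, assuming the harness `exec` forwards `input=` to the subprocess and that the CRDs serve `v1` (it may be `v1beta1` depending on the shipped version):

```python
MY_GATEWAY = """
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: ck-gateway
  listeners:
    - name: http
      protocol: HTTP
      port: 80
"""

# Apply via stdin, matching how the tests shell out to kubectl.
session_instance.exec(
    ["k8s", "kubectl", "apply", "-f", "-"],
    input=MY_GATEWAY.encode(),
)
```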
Additionally, another chart, ck-gateway-contour, installs the Contour-specific gateway components.
You can run `k8s disable gateway` and it will remove all gateway resources.
Shared CRDs
Gateway and Ingress share some common Contour CRDs. These are moved to their own chart, which lives in ck-contour-common.
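Per the review thread above, waiting for these CRDs has to cover both API groups the shared chart serves; a sketch of that check as two loops (matching the "make it two loops" commit):

```python
# Wait until resources from both API groups are registered:
# projectcontour.io (per the review comment) and gateway.networking.k8s.io.
for group in ("projectcontour.io", "gateway.networking.k8s.io"):
    util.stubbornly(retries=10, delay_s=3).on(session_instance).until(
        lambda p, g=group: g in p.stdout.decode()
    ).exec(["k8s", "kubectl", "api-resources"])
```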
Changes to Cilium
Deploy a second GatewayClass named ck-gateway.
List of images (Moonray):