Support eventual consistency in conformance tests #1080
Conversation
Hi @nathancoleman. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any chance that we could instead require a conservative readiness condition? I ask because making the Gateway API eventually consistent (rather than conservatively reporting readiness) seems like it pushes that uncertainty onto consumers of the API.
Hi @evankanderson! There was some discussion about adding a readiness condition at yesterday's meeting, and the context from that discussion might be helpful. The recording is up on YouTube here (linked to the appropriate timestamp) and the meeting notes are available here.
FYI I've been able to run the conformance tests from this PR against Contour's implementation successfully, whereas the tests in master were flaky for us. Thanks!
conformance/utils/http/http.go
Outdated
    return true
}, maxConsistencyPeriodPerRequest, 1*time.Second, "error making request, never got expected status")

// Once we've made a successful request and gotten a response with the
Sorry to nitpick this a lot... a bit worried about the logic here. Typically what I have seen is that config changes will go from "all old config" to "mix of old and new" before arriving at "new config". So if we assert that once we see the 'new' state it's immediately always 'new', we may see issues.
For example, if we have 2 pods implementing the gateway and 1 gets the new config first.
Here is how the Istio logic for this works: https://github.com/istio/istio/blob/30a866fdb5ad65eccde3dafe21d19f455addbd72/pkg/test/util/retry/retry.go#L182.
Basically we test that we get a success N times in a row -- but if we fail part way through the check we continue on retrying.
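For illustration, a minimal sketch of that pattern in Go; the names here (awaitConvergence, checkFn, the specific counts) are placeholders, not the actual helpers in this PR or in the Istio code linked above:

```go
// Sketch of the "N consecutive successes" retry pattern: a failure part way
// through resets the streak but does not abort the wait, which tolerates the
// window where old and new config are both being served.
package consistency

import (
	"fmt"
	"time"
)

// awaitConvergence calls checkFn until it returns true requiredConsecutive
// times in a row, or the timeout elapses.
func awaitConvergence(checkFn func() bool, requiredConsecutive int, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	streak := 0
	for time.Now().Before(deadline) {
		if checkFn() {
			streak++
			if streak >= requiredConsecutive {
				return nil
			}
		} else {
			streak = 0 // mixed old/new config: reset the streak and keep trying
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("did not see %d consecutive successes within %s", requiredConsecutive, timeout)
}
```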
Also a bit of a nitpick, but I am not sure we need the consistency_check_%d to show up in the test result? Doing the above would make that pretty hard to retain anyways.
I agree with the logic suggested here by @howardjohn. Maybe a fairly low timeout like 30 seconds to get 3 consecutive successful requests would work?
👍 Not nitpicking IMO @howardjohn! This makes a lot of sense and will help us avoid similar issues when we make the number of pods implementing the gateway configurable beyond 1 in the near future.
@howardjohn I've pushed some updates to handle intermittent success in the style of what you linked. Let me know what ya think!
You can see an example run w/ log output under Run tests here.
Thanks for the work on this @nathancoleman! Agree with John's suggestion but otherwise this LGTM.
@evankanderson I really wish that was possible. That was my original intent here, but unfortunately I think it was a bit too optimistic. Many (maybe most) of the implementations of this API simply don't have a way to know when the config they've written has actually been picked up and implemented. If there are good ways around this, I'm very open to alternatives, because I agree that a reliable "ready" condition on Routes would be more helpful than a condition that indicated that config had been updated and should be ready to use soon.
/ok-to-test
Two thoughts:
@evankanderson @robscott I believe future status additions will address this, but those evolutions in the Gateway API are only tangentially related to this PR. My goal with this change is just to unblock several implementations that are unable to run the conformance tests successfully today. Would it make sense to you guys to move this discussion over to that venue? I think that would allow for more participation as well.
Thanks @nathancoleman! A few more tiny nits but otherwise this LGTM.
Thanks @nathancoleman! /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: nathancoleman, robscott, skriss. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
(best effort ^, is there a better option?)
What this PR does / why we need it:
We discussed in the SIG meeting recently that "accepted" (or "ready" in the future) can only really indicate that the controller has synced all necessary configuration and that routes will begin working "soon".
In order for the conformance tests to support this, they must accept that HTTP requests to new routes may fail for some period of time before eventually succeeding consistently.
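As a rough sketch of what that means for a test helper (the function name, URL handling, and thresholds below are illustrative placeholders, not the exact values used in this PR):

```go
// Illustrative sketch: keep issuing the request until it has returned 200 OK
// a few times in a row within a bounded window, treating early failures as
// expected while the implementation programs the route.
package conformance

import (
	"fmt"
	"net/http"
	"time"
)

func expectEventuallyConsistent(url string) error {
	const (
		required = 3                // consecutive successful responses needed
		timeout  = 30 * time.Second // overall budget for convergence
		interval = time.Second      // pause between attempts
	)
	deadline := time.Now().Add(timeout)
	streak := 0
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				streak++
				if streak >= required {
					return nil
				}
				time.Sleep(interval)
				continue
			}
		}
		// The route may not be programmed everywhere yet; reset and retry.
		streak = 0
		time.Sleep(interval)
	}
	return fmt.Errorf("no %d consecutive successful responses from %s within %s", required, url, timeout)
}
```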
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?: