fix: translator reports errors for existing clusters and secrets #4707
base: main
Conversation
Codecov Report

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #4707      +/-   ##
==========================================
- Coverage   65.55%   65.53%   -0.02%
==========================================
  Files         211      211
  Lines       31972    31961      -11
==========================================
- Hits        20960    20947      -13
- Misses       9768     9772       +4
+ Partials     1244     1242       -2
```

☔ View full report in Codecov by Sentry.
hey @zhaohuabing is the issue that we are generating the same name for the jwks and oidc clusters if both are set in the policy? shouldn't we be using an additional string value to differentiate them?
For a cluster generated from a URL, EG uses the host and port in the cluster name to avoid creating duplicate clusters for the same host+port combination. The name of the generated cluster: `oidc_example_com_443`. We could change this logic to generate a unique name for each individual OIDC or JWT configuration. However, we should also ensure that the translator doesn't throw an error if the cluster already exists.
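For illustration, here is a minimal sketch of this naming scheme. The helper name and the exact sanitization rules are assumptions; only the resulting `oidc_example_com_443` shape comes from the comment above.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// clusterNameFromURL derives a cluster name from the host and port of a
// provider URL, so two providers pointing at the same host+port share one
// cluster. Hypothetical helper, not EG's actual code.
func clusterNameFromURL(prefix, rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	port := u.Port()
	if port == "" {
		// Fall back to the default port for the scheme.
		if u.Scheme == "https" {
			port = "443"
		} else {
			port = "80"
		}
	}
	host := strings.ReplaceAll(u.Hostname(), ".", "_")
	return fmt.Sprintf("%s_%s_%s", prefix, host, port), nil
}

func main() {
	name, _ := clusterNameFromURL("oidc", "https://example.com/token")
	fmt.Println(name) // oidc_example_com_443
}
```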
is it safe to reuse the same cluster configuration? is the naming different when using the backendRefs field?
It's safe to reuse the same cluster for URL-generated clusters, as the cluster configuration is identical for the same host+port combination. For an OIDC provider with backendRefs, EG generates a unique cluster name.
```diff
 func addXdsCluster(tCtx *types.ResourceVersionTable, args *xdsClusterArgs) error {
 	// Return early if cluster with the same name exists
 	if c := findXdsCluster(tCtx, args.name); c != nil {
-		return ErrXdsClusterExists
+		return nil
```
this is not great from an API perspective for `addXdsCluster`, because the old cluster with the same name may not have the same `xdsClusterArgs` as the current request, so the caller should decide how to handle this case, not this method imo
Not a strong opinion, but should the caller call `findXdsCluster` first and decide how to handle this situation? The current approach has an implicit assumption: every caller should check whether the returned error is `ErrXdsClusterExists` and ignore it. This duplicates code in every caller and can cause issues if a caller forgets to handle this special case, like this one.
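To make the duplication concrete, here is a small self-contained sketch of the boilerplate each call site would need under the error-returning design; the stub stands in for the real translator method.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrXdsClusterExists = errors.New("xds cluster exists")

// addXdsCluster stands in for the real translator method; here it
// pretends the cluster was already added.
func addXdsCluster(name string) error {
	return ErrXdsClusterExists
}

func main() {
	// Every caller must remember to treat ErrXdsClusterExists as benign;
	// forgetting this check turns a harmless duplicate into a failure.
	if err := addXdsCluster("oidc_example_com_443"); err != nil &&
		!errors.Is(err, ErrXdsClusterExists) {
		fmt.Println("translation failed:", err)
		return
	}
	fmt.Println("cluster ensured")
}
```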
You're absolutely right, the method is doing too many things. Fine to split it up, but developers will need to make sure they call both functions when writing new logic.
All the current callers just need to call `addXdsCluster`, since they simply ignore `ErrXdsClusterExists` and do nothing. They can all safely assume the `xdsClusterArgs` is the same for clusters with the same name.

I guess this pattern comes from the Kubernetes client API. It totally makes sense there, as there is a race condition between the client and the API server. However, EG doesn't need the same pattern here, since this is a local cache with no concurrent writes.
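For comparison, this is the Kubernetes client pattern being referred to, where tolerating AlreadyExists guards against the client/API-server race. A sketch using client-go's `apierrors` helper; the function name is illustrative.

```go
import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureObject creates obj and tolerates a concurrent creation by another
// writer: the race that justifies this pattern against the API server,
// but not for EG's single-writer local cache.
func ensureObject(ctx context.Context, c client.Client, obj client.Object) error {
	if err := c.Create(ctx, obj); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
```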
Ha, there is also a bug here: the index is always 0, which generates duplicated names for different clusters if there are both ext auth and OIDC within a SecurityPolicy. Will fix it in this PR as well.
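A hypothetical reconstruction of that bug, with illustrative names rather than the actual EG code: a hardcoded index makes the names collide, while a per-cluster index keeps them unique.

```go
package main

import "fmt"

func clusterName(prefix string, index int) string {
	return fmt.Sprintf("%s_%d", prefix, index)
}

func main() {
	features := []string{"extauth", "oidc"}

	// Buggy: the index is hardcoded to 0, so both features collide.
	for range features {
		fmt.Println(clusterName("securitypolicy", 0)) // securitypolicy_0, securitypolicy_0
	}

	// Fixed: use the per-feature index so each cluster gets a distinct name.
	for i := range features {
		fmt.Println(clusterName("securitypolicy", i)) // securitypolicy_0, securitypolicy_1
	}
}
```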
nice catch, prob needs another prefix to differentiate them
```diff
 	}

 	var oidc *ir.OIDC
 	if policy.Spec.OIDC != nil {
 		if oidc, err = t.buildOIDC(
 			policy,
 			resources,
-			gtwCtx.envoyProxy); err != nil {
+			gtwCtx.envoyProxy, // TODO zhaohuabing: Only the last EnvoyProxy will be used as the OIDC name doesn't include the cluster index
```
This is a minor issue, will address it in a follow-up PR.
Fixes #4706: xDS translation failed when OIDC tokenEndpoint and JWT remoteJWKS are specified within the same security policy and use the same hostname.
Refactor: skip adding the cluster/secrets and return nil to make the code cleaner and easier to maintain. It's safe to remove `ErrXdsClusterExists` and `ErrXdsSecretsExists` as they don't need to be handled anywhere.

Release Notes: Yes
Test before the fix:
After: