[Bug]: Role.iam silently removes [] from spec.forProvider fields #1357
Comments
We're seeing the exact same behavior... thanks for opening this bug.
The issue is caused by the … I removed …

@ulucinar Do you remember why we're always setting `omitempty`? Conceptually, it seems unnecessary, and while it's harmless in most cases, it's causing a problem here where the empty state is meaningful.
We discussed this at the SIG-Upjet meeting today. I think the video gets automatically uploaded to YouTube, but I don't know the exact link.
Background and terminology: In AWS IAM, a role supports two different types of policies: managed and inline. The main difference is that an inline policy can only ever be attached to a single role, while a managed policy can be attached to zero or more roles, groups, or users. They are also counted differently against various AWS quotas. A managed policy can either be AWS-managed, and simply referenced by its AWS-documented ARN, or customer-managed, and created by a …

The … When the … This left no way to manage inline role policies using this provider until version 0.40.0, which moved …

It's also worth noting that at the time of provider release v0.40.0 we were still forking a terraform CLI process for every reconciliation of every managed resource, which was extremely resource-intensive. There were some short-term pressures to find ways to reduce the number of managed resources, which became moot once we switched to the no-fork architecture with its dramatically improved performance.

References:
kubernetes/kubernetes#125317
kubernetes/kubernetes#124050

Includes the quote …, although I could not find that message in the linked issue.

#745 contains significant discussion around the decision to add …

Some possible approaches for a solution:
Recording: https://www.youtube.com/watch?v=eeTbRLjDtcc
I believe it is the same underlying issue, but we ran into this from a slightly different angle: when trying to remove all …
You then want to remove that policy for some reason and change your spec to:
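Roughly, the before and after specs were of this shape (a sketch; the metadata and surrounding fields are placeholders, not the actual manifests):

```yaml
# Before: the spec explicitly lists the managed policy.
spec:
  forProvider:
    managedPolicyArns:
      - "arn:aws:iam::aws:policy/some-policy-name"

# After: the entry is removed, leaving the field unset.
spec:
  forProvider: {}
```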
After applying, the `arn:aws:iam::aws:policy/some-policy-name` policy remains associated with the role, with no warnings. This is confusing (and potentially dangerous if the intention is to revoke permissions), and inconsistent with what the documentation says, as mentioned above. (Side note: the snake_case vs. camelCase inconsistency in the docs is a bit confusing as well.) The only way we have found to actually remove the policy from the role is to set:
as alluded to here.
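That is, presumably an explicit empty list rather than dropping the field entirely (a sketch):

```yaml
spec:
  forProvider:
    managedPolicyArns: []
```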
Is there an existing issue for this?
Affected Resource(s)
Resource MRs required to reproduce the bug
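A minimal Role MR consistent with the steps below might look like this (a sketch; the trust policy, names, and API version are illustrative assumptions, not the reporter's actual manifest):

```yaml
apiVersion: iam.aws.upbound.io/v1beta1
kind: Role
metadata:
  name: example-role
spec:
  forProvider:
    assumeRolePolicy: |
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {"Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole"
        }]
      }
    managedPolicyArns: []
```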
Steps to Reproduce
Start running `kubectl get -o yaml role.iam.aws.upbound.io --watch`
Apply the manifest to the cluster. I observe the same behavior whether it is applied server-side or client-side.
Observe that while the very first version of the resource contains
managedPolicyArns: []
as specified, every subsequent version has the key removed. Here are the first few observed versions of the resource:
What happened?
The `managedPolicyArns` field in the iam Role resource is a bit unusual. Its description says that setting an empty array will result in disassociating any policies attached out-of-band, while setting it to null will simply ignore attached policies.

I would like to use Crossplane to enforce that additional policies are not attached to a certain IAM role, but it seems that the provider (or some other part of the reconciliation machinery) is removing the explicitly set `[]` value, so I can't actually get the managed resource to have the state it would need to enforce no additional policies.

Relevant Error Output Snippet
No response
Crossplane Version
1.14
Provider Version
1.6.0
Kubernetes Version
v1.30.0
Kubernetes Distribution
kind
Additional Info
No response