Bug: asoctl import-ing a ManagedCluster produces an invalid resource #3805

Comments
I suspect this is an example of a wider issue. In theory, Azure Resource Providers are supposed to be case insensitive and case preserving - it doesn't matter what letter case you use, and the system preserves what you send it. In practice this is a goal, but one that isn't achieved by all RPs, including, it seems, the AKS RP. I suspect that what we need is a case-correcting conversion when populating the destination.
We can work around this now in ASO (if we want), but FWIW this is actually on my list to fix in the AKS RP. Just not sure when that fix will land.
To handle this purely within ASO - which I suspect we want to do to avoid upstream delays with RP changes - we'd need to change our conversion logic for enum values. Currently we convert enum values using direct casts:

```go
backendPoolType := string(source.BackendPoolType)
destination.BackendPoolType = BackendPoolType(backendPoolType)
```

This is case preserving, but assumes the source casing we have is correct. We could instead code generate a conversion map, indexed by lowercase string value, allowing case correction as we go:

```go
// Package global
var backendPoolTypeConversion = map[string]BackendPoolType{
	"nodeip":              BackendPoolType_NodeIP,
	"nodeipconfiguration": BackendPoolType_NodeIPConfiguration,
}

// Conversion code: look up the lowercased value so any casing is corrected,
// falling back to a case-preserving cast for values we don't recognize.
backendPoolType := string(source.BackendPoolType)
if v, ok := backendPoolTypeConversion[strings.ToLower(backendPoolType)]; ok {
	destination.BackendPoolType = v
} else {
	destination.BackendPoolType = BackendPoolType(backendPoolType)
}
```
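As a follow-on thought, here is a minimal sketch of how that per-enum lookup could be factored into one shared generic helper, assuming the generated maps stay lowercase-keyed; the name correctEnumCase is hypothetical, not existing ASO code:

```go
package conversions

import "strings"

// correctEnumCase returns the canonical casing for value when it appears in a
// lowercase-keyed map of known enum values, and preserves the input otherwise.
func correctEnumCase[T ~string](value string, canonical map[string]T) T {
	if v, ok := canonical[strings.ToLower(value)]; ok {
		return v
	}
	return T(value)
}

// Hypothetical call site in generated conversion code:
//   destination.BackendPoolType = correctEnumCase(string(source.BackendPoolType), backendPoolTypeConversion)
```

The generated conversions would then shrink to a single call per enum field, with the generator only having to emit the lowercase-keyed maps.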
This still seems to be an issue for me with asoctl v2.7.0. In addition to the changes listed above in the description, I found I also had to change spec.type. The errors I saw this time were k8s webhook errors. I think when I first ran into this, the webhooks were allowing the values, but they were being rejected by the RP.
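As an aside on why the wrong casing now fails in-cluster rather than at the RP: the generated enum types restrict accepted values to an exact set of strings, so a value like nodeIPConfiguration is rejected before it ever reaches Azure. A hedged illustration of the kind of kubebuilder marker involved - not the actual generated ASO code, which may differ in detail:

```go
package api

// Illustration only: the marker restricts accepted values to exact strings,
// so "nodeIPConfiguration" is rejected while "NodeIPConfiguration" is accepted.
// +kubebuilder:validation:Enum=NodeIP;NodeIPConfiguration
type BackendPoolType string

const (
	BackendPoolType_NodeIP              BackendPoolType = "NodeIP"
	BackendPoolType_NodeIPConfiguration BackendPoolType = "NodeIPConfiguration"
)
```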
Are you using the latest asoctl? I think the fix needs the latest asoctl, which performs normalization when it writes the YAML. We didn't change the fact that ASO only accepts the one case; we fixed asoctl to always export the right case. The spec.type thing seems surprising... is that just because you had set the wrong type?
Yes, this is with asoctl v2.7.0. One difference I see is that this time asoctl imported an AKS cluster created with the Azure portal rather than the az cli. The spec.type change was on the agent pool.
I've reopened this since it seems like the fix isn't working. The az cli works now because I fixed AKS itself to normalize the case from standard -> Standard internally, but I suspect the portal is passing something different. We also should track down the issue with the agent pool spec.type.
On the AgentPool status types, we have both
Version of Azure Service Operator
v2.5.0
Describe the bug
asoctl import-ing an AKS cluster created with az aks create with only the required parameters produces invalid values that trigger validation errors from the AKS API. I had to make the following tweaks to the generated YAML to get the ManagedCluster to reconcile successfully:

- spec.networkProfile.loadBalancerProfile.backendPoolType: nodeIPConfiguration -> NodeIPConfiguration
- spec.networkProfile.loadBalancerSku: Standard -> standard

To Reproduce
Steps to reproduce the behavior:
Expected behavior
asoctl import produces valid YAML that doesn't require any additional changes to reconcile successfully.

Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.