New "app.kubernetes.io/version" label added to collector deployment selector cause reconciliation error #840
Comments
@DWonMtl thanks for reporting. Would you like to submit a fix? The version label was added in #797 by @yuriolisa (just pinging for FYI :)). It might have been unintentional.
@pavolloffay, thank you for the heads up. I can take this issue.
@pavolloffay, indeed I can work on it. @yuriolisa, let me know if you are already working on it; if not, I will start. Also, I wasn't sure about the direction your team wanted to take on this issue. Many solutions are possible. The simplest one would be to remove the version label, but I think it makes sense to have it as a label. One pattern we often use in my teams for these kinds of "label" and "selector" issues is to have a function for selectors that returns the "static labels", and a function for labels that appends the more volatile labels to the selector labels. For example:

```go
func Selectors(instance v1alpha1.OpenTelemetryCollector, filterLabels []string) map[string]string {
    // ... add the static labels
}

func Labels(instance v1alpha1.OpenTelemetryCollector, filterLabels []string) map[string]string {
    base := Selectors(instance, filterLabels)
    // ... add the volatile labels
    return base
}
```
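For illustration, here is a minimal sketch of how a Deployment builder could consume these two helpers so that only the static labels reach the immutable selector. The buildDeployment name, field layout, and import paths are illustrative, not the operator's actual code, and it assumes the Selectors and Labels helpers above are implemented in the same package:

```go
package collector

import (
    "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildDeployment is a hypothetical illustration: the immutable selector
// sees only the static labels, while the pod template carries the full,
// possibly volatile, label set.
func buildDeployment(instance v1alpha1.OpenTelemetryCollector, filterLabels []string) appsv1.Deployment {
    return appsv1.Deployment{
        Spec: appsv1.DeploymentSpec{
            Selector: &metav1.LabelSelector{
                // Immutable after creation: static labels only.
                MatchLabels: Selectors(instance, filterLabels),
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    // Safe to include volatile labels such as the version.
                    Labels: Labels(instance, filterLabels),
                },
            },
        },
    }
}
```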
@DWonMtl, it's all yours. You can start working on it. The approach was to ensure that we have the required labels in place for our OTEL resources. However, I missed the Deployment Selector part, which is causing this bug. Thank you for reporting that.
ok! on it
I wonder if this is what's causing the failure I'm seeing too: see the mention on #872 (comment). My deployments use sha256 container hashes to specify the container versions. The cause looks like 0dce2df from #797.
Indeed, if your version is a 64-char sha, this will break.
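For background, Kubernetes limits label values to 63 characters, which is why a 64-character sha256 digest fails validation. A minimal sketch demonstrating the limit with the upstream apimachinery helper:

```go
package main

import (
    "fmt"
    "strings"

    "k8s.io/apimachinery/pkg/util/validation"
)

func main() {
    // A sha256 digest is 64 hex characters, one more than the
    // 63-character maximum Kubernetes enforces for label values.
    sha := strings.Repeat("a", 64)
    if errs := validation.IsValidLabelValue(sha); len(errs) > 0 {
        fmt.Println(errs) // reports that the value exceeds 63 characters
    }
}
```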
I'll add a test to the test suite.
Filed as #879 since it's a different issue from the selector change.
You'll get the same bug when you downgrade from v0.50.0 to v0.48.0. To avoid this, you can just delete the affected collector Deployment (or similar) and let the operator recreate it with the old selector. You can find all affected instances by listing the Deployments the operator manages, as sketched below.
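For anyone scripting the workaround, here is a hedged sketch in Go using client-go. It assumes the operator marks its Deployments with the app.kubernetes.io/managed-by=opentelemetry-operator label, so double-check the selector against your cluster before deleting anything:

```go
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig in the default location (~/.kube/config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Assumption: the operator labels its Deployments with
    // app.kubernetes.io/managed-by=opentelemetry-operator.
    list, err := client.AppsV1().Deployments("").List(context.TODO(), metav1.ListOptions{
        LabelSelector: "app.kubernetes.io/managed-by=opentelemetry-operator",
    })
    if err != nil {
        panic(err)
    }
    for _, d := range list.Items {
        fmt.Printf("deleting %s/%s so the operator can recreate it\n", d.Namespace, d.Name)
        if err := client.AppsV1().Deployments(d.Namespace).Delete(context.TODO(), d.Name, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
}
```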
Description
A new label was recently introduced in the deployment selector; see https://github.com/open-telemetry/opentelemetry-operator/blob/v0.49.0/pkg/collector/labels.go#L51.
It is perfectly legitimate to add a version to the labels; however, these labels are also added to the Deployment "Selector"; see https://github.com/open-telemetry/opentelemetry-operator/blob/v0.49.0/pkg/collector/deployment.go#L46.
When the operator tries to reconcile the Deployment, it fails, because the Deployment Selector is an "immutable field" and cannot be modified once created. Since this label refers to the application version contained in the OpenTelemetryCollector spec.image, each time we change the image version we will face this reconciliation error.
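To make the mechanism concrete, here is a simplified sketch, paraphrased from the v0.49.0 labels.go linked above (not a verbatim copy of the operator's code), of how the version label is derived from spec.image:

```go
package collector

import "strings"

// collectorLabels is a simplified, paraphrased sketch of the v0.49.0
// label-building behavior described in this issue.
func collectorLabels(image string) map[string]string {
    labels := map[string]string{
        "app.kubernetes.io/managed-by": "opentelemetry-operator",
    }
    // The text after the last ":" in spec.image becomes the version label,
    // so changing the image tag changes this label's value.
    if idx := strings.LastIndex(image, ":"); idx >= 0 {
        labels["app.kubernetes.io/version"] = image[idx+1:]
    }
    return labels
}
```

Because the same label map also feeds spec.selector.matchLabels, the first image-tag change after creation produces a selector diff, which the API server rejects as a mutation of an immutable field.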