Patching ovn image is failing #218

Open
venkataanil opened this issue Oct 17, 2023 · 0 comments · May be fixed by #219
venkataanil commented Oct 17, 2023

Currently, scale-ci-deploy uses the command below to patch the OVN image:

oc -n openshift-network-operator set env deployment.apps/network-operator OVN_IMAGE={{openshift_ovn_image}} RELEASE_VERSION="5.0.0"

However, this patching fails because RELEASE_VERSION is specified.
The network-operator pod is stuck in the "CrashLoopBackOff" state:

[root@ip-172-31-32-243 venkataanil-ovn-4.14-aws-ovn-large-cp]# oc -n openshift-network-operator get all
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
NAME                                   READY   STATUS             RESTARTS      AGE
pod/network-operator-b8549fd9d-5mxbf   0/1     CrashLoopBackOff   8 (53s ago)   46m

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/metrics   ClusterIP   None         <none>        9104/TCP   69m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/network-operator   0/1     1            0           69m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/network-operator-b8549fd9d   1         1         0       46m
replicaset.apps/network-operator-d77fcbf59   0         0         0       69m
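
To dig into the crash loop, the operator pod's logs can be pulled directly. A minimal example, using the pod name from the output above; --previous shows the logs of the last crashed container:

oc -n openshift-network-operator logs pod/network-operator-b8549fd9d-5mxbf --previous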

The network-operator log shows the stack trace below:

runtime/debug.Stack()
        runtime/debug/stack.go:24 +0x65
sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/log/log.go:59 +0xbd
sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc000705d00, {0x2d7c20f, 0x14})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/log/deleg.go:147 +0x4c
github.com/go-logr/logr.Logger.WithName({{0x3232058, 0xc000705d00}, 0x0}, {0x2d7c20f?, 0xa?})
        github.com/go-logr/logr@v1.2.4/logr.go:336 +0x46
sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc0008786c0, {0x0, 0x0, {0x3233d80, 0xc000e41ba0}, 0x0, {0x0, 0x0}, 0x0})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/client/client.go:115 +0xb4
sigs.k8s.io/controller-runtime/pkg/client.New(0x3218fb0?, {0x0, 0x0, {0x3233d80, 0xc000e41ba0}, 0x0, {0x0, 0x0}, 0x0})
        sigs.k8s.io/controller-runtime@v0.15.0/pkg/client/client.go:101 +0x85
github.com/openshift/cluster-network-operator/pkg/client.NewClusterClient(0xc0008786c0, 0xc000cc2240)
        github.com/openshift/cluster-network-operator/pkg/client/client.go:188 +0x2b0
github.com/openshift/cluster-network-operator/pkg/client.NewClient(0xa33df3cc5fe7ce33?, 0x3e8d4389d324349d?, {0x2d5d7a5, 0x7}, 0xf5c8fb05247353c9?)
        github.com/openshift/cluster-network-operator/pkg/client/client.go:100 +0xa5
github.com/openshift/cluster-network-operator/pkg/operator.RunOperator({0x322acf0, 0xc0003816d0}, 0xc000b3cf80, {0x2d5d7a5, 0x7}, 0x3213058?)
        github.com/openshift/cluster-network-operator/pkg/operator/operator.go:44 +0xbd
main.newNetworkOperatorCommand.func2({0x322acf0?, 0xc0003816d0?}, 0x4d32fa0?)
        github.com/openshift/cluster-network-operator/cmd/cluster-network-operator/main.go:49 +0x3b
github.com/openshift/library-go/pkg/controller/controllercmd.ControllerBuilder.getOnStartedLeadingFunc.func1.1()
        github.com/openshift/library-go@v0.0.0-20230503144409-4cb26a344c37/pkg/controller/controllercmd/builder.go:351 +0x74
created by github.com/openshift/library-go/pkg/controller/controllercmd.ControllerBuilder.getOnStartedLeadingFunc.func1
        github.com/openshift/library-go@v0.0.0-20230503144409-4cb26a344c37/pkg/controller/controllercmd/builder.go:349 +0x10a
I1017 10:56:17.869166       1 operator.go:81] Creating status manager for stand-alone cluster
I1017 10:56:17.869203       1 operator.go:86] Adding controller-runtime controllers
I1017 10:56:17.869532       1 operconfig_controller.go:100] Waiting for feature gates initialization...
I1017 10:56:17.869551       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
E1017 10:56:17.871646       1 simple_featuregate_reader.go:290] cluster failed with : unable to determine features: missing desired version "5.0.0" in featuregates.config.openshift.io/cluster
E1017 10:56:17.871683       1 simple_featuregate_reader.go:290] cluster failed with : unable to determine features: missing desired version "5.0.0" in featuregates.config.openshift.io/cluster
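
The operator is looking for the patched version "5.0.0" in the cluster FeatureGate object and cannot find it. Assuming cluster-admin access, the FeatureGate resource the operator is reading can be inspected with:

oc get featuregates.config.openshift.io/cluster -o yaml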

We are not sure why RELEASE_VERSION is being overridden when we only want OVN_IMAGE to be overridden. To fix this issue, the task should set only OVN_IMAGE when the user requests OVN patching.
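
A minimal sketch of the adjusted command, assuming the same {{openshift_ovn_image}} variable and leaving RELEASE_VERSION untouched:

oc -n openshift-network-operator set env deployment.apps/network-operator OVN_IMAGE={{openshift_ovn_image}}

This keeps the RELEASE_VERSION already set on the deployment intact, so the operator's feature-gate lookup is no longer pointed at the non-existent "5.0.0" version.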

venkataanil added a commit to venkataanil/scale-ci-deploy that referenced this issue Oct 17, 2023
The network operator is stuck in the CrashLoopBackOff state when we
override RELEASE_VERSION during OVN patching. The purpose of this
ansible task is to only override OVN_IMAGE, so we will avoid
changing RELEASE_VERSION in this task.

Fixes cloud-bulldozer#218
venkataanil linked a pull request Oct 17, 2023 that will close this issue