pods failing with ImagePullBackOff #312
Comments
This seems similar to the issue from #236. We are investigating.
We have observed this issue as well as part of our work to enable Liberty on ARO: https://docs.microsoft.com/en-us/azure/developer/java/ee/websphere-family#open-liberty-and-websphere-liberty-on-aro. A fix would be highly appreciated.
I'm hitting this on a regular basis, so I would really appreciate it being fixed. Thank you.
We've just merged a PR into main which should fix this issue.
Great news!
Open Liberty Operator v0.8.1 is now released with the fix for this issue. Release information is documented here.
Bug Report
What did you do?
Build, package, and push an Open Liberty container image to the OpenShift internal image registry as the image stream app-modernization:v1.0.0
Note: I have a simple demo app, with instructions in the Git repo here -> https://github.com/OpenShift-Z/openliberty-operator-ocpz#build-and-push-the-container-image
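The build-and-push step above can be sketched as follows. This is a minimal illustration, not the repo's exact instructions: it assumes podman is installed, the default external registry route is exposed, and a project named demo (the project name and build context are illustrative).

```shell
# Sketch: push a locally built image to the OpenShift internal image
# registry, where it appears as an image stream in the target project.
# Resolve the external hostname of the internal registry's default route.
REGISTRY=$(oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}')

# Authenticate to the registry with the current user's token.
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY"

# Build, tag, and push. "demo" is an assumed project name; pushing to
# <registry>/<project>/<name>:<tag> creates the image stream automatically.
podman build -t app-modernization:v1.0.0 .
podman tag app-modernization:v1.0.0 "$REGISTRY/demo/app-modernization:v1.0.0"
podman push "$REGISTRY/demo/app-modernization:v1.0.0"
```

Inside the cluster, pods then reference the image via the internal service address, image-registry.openshift-image-registry.svc:5000/demo/app-modernization:v1.0.0.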
Create an OpenLibertyApplication CR (copy the example yaml to a file olapp.yaml)
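A minimal olapp.yaml of the kind referenced above might look like the following sketch. The apiVersion and field names follow the Open Liberty Operator's CRD for the v0.8.x line; the metadata name and the demo namespace in the image reference are illustrative assumptions, not taken from the issue:

```yaml
apiVersion: apps.openliberty.io/v1beta2
kind: OpenLibertyApplication
metadata:
  name: app-modernization        # illustrative name
spec:
  # Image stream pushed to the internal registry in the previous step;
  # "demo" is an assumed project/namespace.
  applicationImage: image-registry.openshift-image-registry.svc:5000/demo/app-modernization:v1.0.0
  replicas: 1
  expose: true
```

Applying this CR (oc apply -f olapp.yaml) is what triggers the operator to create the deployment, service account, and pull-authorization objects discussed below.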
What did you expect to see?
The pod should start normally, without first having to be deleted.
What did you see instead?
The pod starts before the necessary objects (e.g. the pull secret, service account, role, and role binding) are in place to permit the container image to be pulled from the internal image registry, so it fails with ImagePullBackOff.
Environment
Possible solution
I believe the Open Liberty Operator may be creating the deployment/pod resource before the requisite service account, secret, and associated Role and RoleBinding are created. As a result, the necessary authorization is not yet established when the pod starts.
Merely deleting the pod and letting the deployment recreate it seems to work, suggesting this may be a timing/synchronization issue.
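The delete-and-recreate workaround described above can be done with a label selector rather than a pod name; a sketch, assuming the operator labels the pods with app.kubernetes.io/instance=<CR name> (adjust the selector to whatever labels your pods actually carry):

```shell
# Workaround sketch: delete the stuck pod; the Deployment recreates it,
# and by then the pull secret/service account should exist.
# The label selector is an assumption; verify with "oc get pods --show-labels".
oc delete pod -l app.kubernetes.io/instance=app-modernization

# Watch the replacement pod come up.
oc get pods -w
```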