
document the embedded oc compatibility fallout #477

Closed
gabemontero opened this issue Jan 11, 2018 · 5 comments

Comments

@gabemontero
Contributor

Follow-up to the recent 3.7 GA fallout and various openshift-sme email threads, including https://mail.google.com/mail/u/0/#label/openshift-sme/16036b7c1d422563

  1. @bparees's last summation was, I think, pretty concise and comprehensive (a pod template sketch illustrating the "pull always" point follows this list):
There has been a bit of churn in this space lately, but the thing to check is what version of the oc client is in each of the slave images you're using. The v3.7 oc client is *not* backwards compatible with 3.5.

The current state of the world should be:
jenkins-*-rhel7:v3.7 - contains oc 3.7 binary
jenkins-*-rhel7:v3.6 - contains oc 3.7 binary
jenkins-*-rhel7:latest - contains oc 3.6 binary  ("latest" points to the v3.6 image and is going to stay that way going forward).

jenkins-*-centos7:v3.7 - contains the oc 3.7 binary
jenkins-*-centos7:v3.6 - contains the oc 3.6 binary
jenkins-*-centos7:latest - contains the oc 3.7 binary (I know this seems inconsistent, but we don't really make compatibility guarantees about our centos images.)

On top of that, because the slave pod configurations do not specify "pull always", you may have older/different images on your nodes depending on when the images were pulled. We've just changed that default to pull always to avoid this problem in the future, but for now you should update your own slave configurations to pull always to get consistency around what is being used on each node.

  2. @bparees - was there a bugzilla or something already in play to handle this that I'm forgetting? Also, I'm thinking we capture the details here in this repo's README, and then add a link to it in doc.openshift.io, referencing the issue generically. Thoughts / is there another required path?
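For reference, the "pull always" advice above amounts to two settings on the Kubernetes plugin container template: a pinned image tag and alwaysPullImage. A minimal sketch, assuming the element names the Kubernetes plugin uses when it serializes pod templates (the jnlp container name and the v3.7 tag are just examples):

<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
  <name>jnlp</name>
  <!-- pin the tag to the cluster version rather than relying on :latest -->
  <image>openshift/jenkins-slave-maven-centos7:v3.7</image>
  <!-- pull on every provision so every node runs the same image -->
  <alwaysPullImage>true</alwaysPullImage>
  <workingDir>/tmp</workingDir>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>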

@bmcelvee fyi

@bparees
Contributor

bparees commented Jan 11, 2018

@gabemontero no existing BZ, putting it in the jenkins repo readme for now seems like a reasonable starting point. There's probably something that could be said here too about being aware of what oc client your jenkins image contains (and aligning your jenkins image version w/ your cluster version):
https://docs.openshift.org/latest/using_images/other_images/jenkins.html

possibly in here:
https://docs.openshift.org/latest/using_images/other_images/jenkins.html#client-plugin-in

but possibly just in the general image overview since the oc client can be used outside of the plugin too.

Note that @jim-minter is currently reworking/refactoring some of that doc, so you might be able to talk him into adding some content; if not, you'll want to wait until his changes land.

@gabemontero
Contributor Author

sounds good / thanks @bparees

I'm good with waiting until @jim-minter is finished with his current work (if I recall correctly it is targeted for delivery pretty soon), unless @jim-minter feels so inclined.

@jim-minter
Contributor

@gabemontero openshift/openshift-docs#6981 includes the following text in the new using_images/other_images/jenkins_slaves.adoc:

IMPORTANT: Use and/or extend an appropriate slave image version for the version
of {product-title} that you are using.  If the `oc` client version embedded in
the slave image is not compatible with the {product-title} version, unexpected
behaviour may result.
ifdef::openshift-enterprise,openshift-dedicated[]
See the xref:../../release_notes/index.adoc#release-versioning-policy[versioning
policy] for more information.
endif::[]

Also in using_images/other_images/jenkins.adoc, it updates the example ConfigMap to specify

<image>openshift/jenkins-slave-maven-centos7:v3.9</image>
<alwaysPullImage>true</alwaysPullImage>

instead of

<image>openshift/jenkins-slave-maven-centos7</image>
<alwaysPullImage>false</alwaysPullImage>
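For anyone who has not dug into those pod template ConfigMaps, the two elements above live inside a container template nested in the serialized pod template. A trimmed sketch of the surrounding structure, assuming the Kubernetes plugin's XML form (names like maven-v39 are placeholders, not taken from the docs PR):

<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  <name>maven-v39</name>
  <label>maven-v39</label>
  <serviceAccount>jenkins</serviceAccount>
  <containers>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>jnlp</name>
      <!-- the two lines the docs change touches: versioned tag plus always pull -->
      <image>openshift/jenkins-slave-maven-centos7:v3.9</image>
      <alwaysPullImage>true</alwaysPullImage>
      <workingDir>/tmp</workingDir>
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
  </containers>
</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>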

@gabemontero
Contributor Author

Cool @jim-minter - thanks

The only potential delta we might want to add is the caveats around the latest tag and the rhel vs. centos differences.

If you want to take a crack at that in your pull so we can debate / harden the details there, great. Or I'm fine with percolating on it a bit more and then submitting a separate PR after yours merges, using what you've got as the starting point / location.

scoheb added a commit to scoheb/ci-pipeline that referenced this issue Jan 18, 2018
Since v3.7 of openshift, there is the possibility for breakage in
backwards compatibility when using the v3.7 'oc' command with a 'v3.6'
master.

This change forces us to use the v3.6 oc binary

This is a better practice in any event, since we should not be pointing
to a 'latest' tag.

See openshift/jenkins-sync-plugin#173 and openshift/jenkins#477 for details
@gabemontero
Contributor Author

Upon further reflection, I think the precise tag / oc binary mapping is the type of detail suited to this repo's readme vs. clutter in the openshift-docs (where I think the level of detail @jim-minter went with there is appropriate).

Unless I hear compelling dissension I'll craft a PR, collect editorial comments, and merge.

scoheb added a commit to CentOS-PaaS-SIG/ci-pipeline that referenced this issue Feb 6, 2018
Since v3.7 of openshift, there is the possibility for breakage in
backwards compatibility when using the v3.7 'oc' command with a 'v3.6'
master.

This change forces us to use the v3.6 oc binary

This is a better practice in any event, since we should not be pointing
to a 'latest' tag.

See openshift/jenkins-sync-plugin#173 and openshift/jenkins#477 for details