install-plugins.sh unexpectedly removed from Docker images prior to 2.357 #1415
First step: identifying the root cause. As shared in the Gitter channel "jenkinsci/docker" yesterday:
04:56:48 WARNING: could not get a valid manifest at the URL https://index.docker.io/v2/jenkins/jenkins/manifests/2.329-alpine-jdk17.
04:56:48 (For debugging purposes) manifest={"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":"unknown tag=2.329-alpine-jdk17"}]}
04:56:48 2.329 not published yet
04:56:48 Version 2.329 higher than 1.0: publishing 2.329 latest_weekly:false latest_lts:false
# ...
05:01:47 WARNING: could not get a valid manifest at the URL https://index.docker.io/v2/jenkins/jenkins/manifests/2.330-alpine-jdk17.
05:01:47 (For debugging purposes) manifest={"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":"unknown tag=2.330-alpine-jdk17"}]}
05:01:47 2.330 not published yet
05:01:47 Version 2.330 higher than 1.0: publishing 2.330 latest_weekly:false latest_lts:false
# ...
05:03:25 WARNING: could not get a valid manifest at the URL https://index.docker.io/v2/jenkins/jenkins/manifests/2.331-alpine-jdk17.
05:03:25 (For debugging purposes) manifest={"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":"unknown tag=2.331-alpine-jdk17"}]}
05:03:25 2.331 not published yet
05:03:25 Version 2.331 higher than 1.0: publishing 2.331 latest_weekly:false latest_lts:false
# ...
05:05:01 WARNING: could not get a valid manifest at the URL https://index.docker.io/v2/jenkins/jenkins/manifests/2.332-alpine-jdk17.
05:05:01 (For debugging purposes) manifest={"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":"unknown tag=2.332-alpine-jdk17"}]}
05:05:01 2.332 not published yet
# ...
05:07:02 WARNING: could not get a valid manifest at the URL https://index.docker.io/v2/jenkins/jenkins/manifests/2.332.1-alpine-jdk17.
05:07:02 (For debugging purposes) manifest={"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":"unknown tag=2.332.1-alpine-jdk17"}]}
05:07:02 2.332.1 not published yet
05:07:02 Version 2.332.1 higher than 1.0: publishing 2.332.1 latest_weekly:false latest_lts:false
# ...
10:56:49 Tag is already published: 2.329
10:56:51 Tag is already published: 2.330
10:56:54 Tag is already published: 2.331
# ...
10:56:56 Tag is already published: 2.332
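For reference, each WARNING above is the result of a plain Docker registry manifest lookup, which can be reproduced by hand (a sketch using the standard Docker Hub token flow and jq; this is not code taken from publish.sh itself):

# Get an anonymous pull token for the jenkins/jenkins repository.
TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:jenkins/jenkins:pull" | jq -r .token)

# Ask the registry for the manifest of one tag. A missing tag returns the
# MANIFEST_UNKNOWN error body seen in the build log above.
curl -sSL \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://index.docker.io/v2/jenkins/jenkins/manifests/2.329-alpine-jdk17"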
The shell function at https://github.com/jenkinsci/docker/blob/master/.ci/publish.sh#L35 is called to verify whether the version passed as its first argument is considered published or needs a rebuild. It returns 0 only if every tag of every image for this version has a manifest found on Docker Hub. With this fragile code, merging PR #1399 had the side effect of marking every version tracked by this script as "not published", because the new tags introduced in that PR did not exist yet.
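A minimal bash sketch of that behavior (hypothetical names, not the actual publish.sh code; get-manifest stands for the registry lookup shown above, and the tag list is illustrative):

# Hypothetical sketch. get-manifest "<tag>" is assumed to succeed only
# when the tag's manifest exists on Docker Hub (see the curl call above).
is-published() {
  local version=$1
  for tag in "$version" "$version-alpine" "$version-alpine-jdk17"; do
    if ! get-manifest "$tag"; then
      return 1   # one missing tag marks the whole version "not published"
    fi
  done
  return 0       # all tags found: version considered published
}

# Consequence: introducing a brand-new tag suffix (as PR #1399 did) makes
# every previously published version fail this check and get rebuilt.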
We confirm this is unexpected. The PR which removed the script was only supposed to affect images from 2.357 onward, not the previously published tags.
Proposal: rebuild all the images that were wrongly rebuilt (the range of versions covered by the logs above) so they ship the original install-plugins.sh again.
@timja @MarkEWaite @basil @jeremycornett (and any other user) would that be good for you?
@dduportal thanks for the detailed analysis. I like that plan very much. I'm not clear how to do that image rebuild one time, but I trust that you have a way to do it that gives you the confidence that it will work.
The proposed plan is the following:
WDYT?
That sounds great to me. Thanks for doing it!
Opened a PR to support the "hotfix" work with a proposal: #1416.
Any ideas/objections/tips?
A simpler change would be to not do the magic version rebuild and just build the latest version, allowing a parameter or similar for LTS. There's an orthogonal question on image patching: it's generally somewhat expected that image tags still get OS patching, but that is somewhat complicated in our current setup.
That one looks quite easily doable: there is already detection of the "latest weekly". Is that what you mean by the "latest" version? For the parameter, if I understand your proposal correctly, that would mean a pipeline parameter saying "build the LTS version passed as parameter"?
All images were re-published 🥳
Confirmed that the republished images contain the expected install-plugins.sh
We also need to consider that we sometimes publish two LTS lines at the same time. Out of curiosity, IIUC, it checked that all images exist and, if not (necessarily the case when we add a new variant), rebuilt all of them? Could the script not just check for the "main" image (tagged 2.357) and, if that's missing, do all of the images? Or check each variant individually and only rebuild what's missing?
Almost, yes: for each version, the script checks whether all the tags AND all the architectures for this version's images (alpine, jdk11, jdk17, arm jdk11, etc.) are published. If any of these variants is missing, the script rebuilds and publishes all the variants for this version.
These are two valid scenarios (if I understand your proposals correctly). The only tricky part is finding a way to build only one variant with docker buildx, because it is a 1..N relationship (a given image variant can have multiple tags). It's doable at the cost of a bit of extra shell. The first proposal (e.g. only checking the latest version) sounds safer to me: the script reads an environment variable giving it the version to be built.
Based on the proposal above with the env variable, when it happens, a human triggers a build and specifies the list of versions as a parameter. The script should be able to parse it (using a separator such as a newline) and iterate in that case, as in the sketch below.
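A rough sketch of that parsing (VERSIONS and build-and-publish are hypothetical names, not existing parameters or functions of this repository):

# Hypothetical: VERSIONS is a pipeline parameter holding one version per line.
while IFS= read -r version; do
  [ -z "$version" ] && continue        # skip blank lines
  build-and-publish "$version"         # assumed helper doing the buildx + push
done <<< "${VERSIONS}"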
Sure, more control is nice for security. Might be annoying for regular LTS operation though? Don't we want more automation rather than less?
There's somewhat of a plan to move the docker publishing to release.ci.jenkins.io and then trigger it as part of the release pipeline. The release pipeline will have a parameter for the exact version, which I think should solve this.
Jenkins and plugins versions report
Environment
What Operating System are you using (both controller, and any agents involved in the problem)?
Ubuntu 20.04 and Debian testing
Reproduction steps
Run docker pull to update the Docker image
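For example (the tag and the script path below are one plausible illustration, not commands from the original report):

docker pull jenkins/jenkins:2.332.1-lts-jdk11
# Inspect the script shipped in the image; after the unintended rebuilds this
# printed the 4-line deprecation stub instead of the 292-line original.
docker run --rm --entrypoint cat jenkins/jenkins:2.332.1-lts-jdk11 \
  /usr/local/bin/install-plugins.sh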
Expected Results
The 292-line shell script should have remained in the previously published Docker images
Actual Results
A 4-line replacement script is in the Docker image instead of the original script
Anything else?
The original install-plugins.sh script can be downloaded from https://github.com/jenkinsci/docker/raw/0e8271bf693bddcaa76cfdedb8ef5d8ae940b859/install-plugins.sh for those who need an immediate replacement.
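For instance (the destination path is arbitrary):

curl -fsSL \
  https://github.com/jenkinsci/docker/raw/0e8271bf693bddcaa76cfdedb8ef5d8ae940b859/install-plugins.sh \
  -o install-plugins.sh
chmod +x install-plugins.sh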
The functionality of the install-plugins.sh script is available from the /bin/jenkins-plugin-cli program. It uses the plugin installation manager tool to more accurately resolve dependencies, report security issues, and manage plugin versions.
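A quick usage sketch (the plugin names and versions are illustrative only):

# Install specific plugins, optionally pinned to a version:
jenkins-plugin-cli --plugins git:4.11.0 configuration-as-code
# Or install everything listed in a file, one plugin per line:
jenkins-plugin-cli -f plugins.txt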