
[JENKINS-48050] Declarative Pipeline support for dockerNode #681

Merged
merged 22 commits into jenkinsci:master on May 24, 2023

Conversation

@jglick jglick commented Aug 24, 2018

Implementation of something akin to JENKINS-48050 but much less ambitious in scope than jenkinsci/pipeline-model-definition-plugin#255 + this patch. Rather than pretending to be a compatible variant of the existing docker agent type, it just introduces a new one. My reasoning is that withDockerContainer (for which agent {docker …} is sugar) is deeply flawed and its specification is essentially the implementation—for some cases it works, for other cases it does not and cannot. dockerRun is a totally different approach with its own set of tradeoffs. For example, dockerRun requires that the image contain a JVM, and uses the equivalent of docker-run from a Java API call made by the master, and keeps the workspace entirely private to the container which also includes the agent JVM; withDockerContainer could use a non-Java-related image, it uses docker-exec with a CLI run on some agent launched by another technique, it requires that some physical agent have access to a Docker server, it uses workspace filesystem mounts with specific Unix permission issues, it imposes special restrictions on ENTRYPOINTs. The list of differences is so long that I cannot imagine any practical scenario in which a user would wish to “transparently” switch from one “implementation” to another merely by installing a new plugin and perhaps flipping some global switch. Same for the kubernetes plugin—this is just a different world, and if you wish to use that technology, editing the agent line in your Jenkinsfile is only the first step.

No attempt yet to allow customization of server or registry credentials, etc. In fact the dockerNode step does not currently support a custom registry at all. (Note that the unfiled patch pretends to accept registry credentials and then pass them to the existing dockerNode credentials argument, but this is wrong: the existing argument is server (dockerd) credentials, not even of the same physical type!) Anyway such additions could be done easily as compatible changes; this PR is here to get a stake in the ground. Miscellaneous requirements listed in JENKINS-48050 such as config-file-provider support are really just requests for tests verifying that everything “just works”, which is far more likely for dockerRun than for withDockerContainer since once the agent is started, you are doing nothing out of the ordinary, and most of that verification is not specific to Declarative either.

I just picked a symbol for the agent type. I do not have a strong opinion. Ideally it would be something that contrasts clearly with the existing docker type, which we ought to deprecate as hopeless.

@batmat @rtyler @ndeloof

stages {
    stage('whatever') {
        steps {
            sh 'java -version'
@jglick commented on this diff:
This is the basic usage. Pretty straightforward in basic cases: just replace docker with dockerContainer and test.
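Spelled out as a full Jenkinsfile, that replacement might look like the following sketch. The agent symbol shown is `dockerContainer` per the test excerpt above, but the final name was still under discussion, and the image is illustrative — per this PR's design it must contain a JVM, since the agent itself runs inside the container:

```groovy
pipeline {
    agent {
        dockerContainer {
            // the image must include a JVM, because the Jenkins agent
            // process is launched inside the container
            image 'maven:3-jdk-8'
        }
    }
    stages {
        stage('whatever') {
            steps {
                sh 'java -version'
            }
        }
    }
}
```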

@abayer abayer requested review from ndeloof and abayer August 24, 2018 14:46
@abayer left a comment:

Seems reasonable at first glance?


ndeloof commented Aug 27, 2018

sounds good.
I gave up with JENKINS-48050 as I can't imagine a reasonable solution as long as we have to live with a Java based remoting agent, so feel free to propose such baby-step improvements if you have some thoughts.

@batmat left a comment:

LGTM


batmat commented Aug 27, 2018

CI tests run seems fishy though. I do see the new tests skipped on Windows, which is expected, but then I do not see them either under any Linux flavor. Going to double-check and run this locally too.


batmat commented Aug 27, 2018

OK, my bad 🤦‍♂️, didn't click "Show more", and there're too many tests to make it practical to look for a specific one.
https://ci.jenkins.io/job/Plugins/job/docker-plugin/job/PR-681/2/testReport/io.jenkins.docker.pipeline/DockerAgentTest/ =>


https://ci.jenkins.io/job/Plugins/job/docker-plugin/job/PR-681/2/testReport/io.jenkins.docker.pipeline/DockerNodeStepTest/ =>


So 👍, 🚢 🇮🇹 ! :-)

pjdarton commented Sep 9, 2019

I've been doing some housekeeping and noticed this PR was here with lots of "approved" notes but wasn't merged by @ndeloof at the time.

My knowledge of pipelines is limited; my knowledge of how to write code to support them is zero; I doubt I can tell the difference between good & bad code here ... hence this message is basically here to ask you all "WTF?" 😉

I did spot that there's some TODO comments in this PR (DockerAgent.java, DockerNodeStepExecution.java), which concerns me 😒

I also have concerns about documentation.
The non-pipeline stuff has lots of built-in help explaining what all the configuration options do, and mostly these are sufficient to get people up and running. However, pipelines don't have linked help-text; pipelines are far more reliant on people googling for answers.
If this plugin is going to gain pipeline support then (a) it'll need to be top-tier code and (b) it'll need to be documented at least as well as the incumbents. To do otherwise will just sow confusion. 😞

Lastly, it seems to me that lots of Jenkins users can't tell the difference between the different docker plugins and end up raising bugs here when they're using the docker pipeline plugin instead ... and I believe that this plugin is more "legacy" than the pipeline one.
IMO Jenkins would be a lot better off if there weren't so many docker plugins competing for attention, and I think the distinction of "pipelines are all the other plugins' responsibility" is nice and clear right now - I fear that adding pipeline support to this plugin may be taking it (and Jenkins as a whole) in the wrong direction.

Note 1: If you want to solve the "too many docker plugins" issue and have a general-purpose high-quality solution, you should also check out https://github.com/KostyaSha/yet-another-docker-plugin - in an ideal world, this plugin (which KostyaSha created) and yet-another-docker-plugin would merge...

Note 2: Re skipping tests on Windows - historically, that's down to the ci.jenkins.io Windows slave VMs not supporting docker - new Windows OSs (can) have docker support, and I'm seeing issues raised by folks expecting this plugin to "just work" on Windows, so it would be nice to be able to test on both... although, personally, I have no experience of docker on windows.

jglick commented Sep 9, 2019

I doubt I can tell the difference between good & bad code here

Sure. Bad code can (usually) be improved, and I am willing to put some time into polishing this up if it has a chance of being merged. I think the bigger question is whether it is a good idea to do this, whether the symbol is sufficiently self-explanatory, etc. Would appreciate any opinion from people maintaining Pipeline features on a day-to-day basis besides @abayer: @dwnusbaum, @bitwiseman, etc.

there's some TODO comments

The two you linked to are RFEs, basically—additional options that could be added compatibly.

pipelines don't have linked help-text

Steps do, in fact, offer in-product help, via the Pipeline Syntax link, and there is a subscreen for Declarative syntax which reads at least some text from the product. I suppose this could offer a DockerAgent/help-image.html in case the meaning of the field is not obvious, and/or DockerAgent/config.jelly could include an f:description note. But if merged, there ought to also be text on jenkins.io explaining this option and contrasting it to others.

it'll need to be documented at least as well as the incumbents

JENKINS-58223 unfortunately means this is a low bar!

lots of Jenkins users can't tell the difference between the different docker plugins

Yes, this is a serious concern, specifically explaining the difference between this and agent docker (implemented in docker-workflow but defined, for historical reasons, in pipeline-model-definition; ditto the related agent dockerfile). Conceptually this should be much easier to explain than withDockerContainer ~ agent docker; I have come to regret writing the original, but it is out there and widely used.

adding pipeline support to this plugin

It already has Pipeline support. This PR is merely adding Declarative syntactic sugar for an existing Scripted-oriented feature: the dockerNode step.

historically, that's down to the ci.jenkins.io Windows slave VMs not supporting docker

As of a few weeks ago, there is a windock label which can run Docker on Windows, so DockerNodeStepTest might be able to start running on Windows too. That would require the Jenkinsfile to be modified to add (I think)

platforms: ['docker', 'windock']

which I cannot (fruitfully) do since I lack write permissions to this repository, so someone with that permission would need to first file a PR making such a change and verifying that it does not break anything, and then merge it, before a contributor PR like this one could even attempt to take advantage of that label.

pjdarton commented Sep 9, 2019

could offer a DockerAgent/help-image.html in case the meaning of the field is not obvious

Even if it's obvious, it should state it explicitly.
Users generally need to be told things twice - if they get the same information from two different routes, they'll generally follow that information. If folks are left to do what's "obvious" then 50% will guess wrong (and then log a gazillion bugs because it didn't work the way they assumed) and the other 50% will be left uncertain they got it right.
I'm a big fan of built-in help text that removes all doubt and ambiguity 😁

...but, before it's worth doing any of that, I think we first need to work out what strategic direction would be best for Jenkins as a whole - we don't want to replace one non-ideal solution with another one we'll regret :-)

It already has Pipeline support

My understanding is that the "pipeline support" within this plugin is minimal and buggy. e.g. the temporary templates it defines conflict with template container-limits because they're indexed by image name not the actual template (because templates don't have an id).
The core DockerCloud functionality works fairly well (because I kept fixing it until it did!), but there's a lot of stuff that's a bit "meh" and has suffered from a lack of attention.

...and, given that this plugin doesn't have any maintainers both able and willing to fix this, I don't see that changing unless someone else who does want that kind of thing added is prepared to drive it forwards.

FYI I'm only here reluctantly as Nick "ran off", leaving me in charge. I'm not using docker pipeline functionality where I work, so I can't spend much time on it; I try to limit my time here to just basic maintenance (merging PRs that I'm 100% sure about, plus any urgent security fixes) rather than major development.
If you want to drive this forwards then I'll happily +1 any request for a co-maintainer...

Docker on Windows

If you can tell me exactly what changes to the (currently one-line long) Jenkinsfile are needed, I could raise one containing those changes ... however, I suspect ci.jenkins.io is unwell right now, as #748 has yet to be noticed and built despite a few hours elapsing.
Also, it'd be necessary to ensure that (at least initially) test failures on Windows were merely informative instead of fatal - in my experience, Microsoft rarely implement a standard without "improving" on it to the point where it no longer works with non-Microsoft software - I suspect that testing on Windows would most likely reveal that Windows docker requires Windows-specific changes ... and I can't develop that locally (at present) because my Windows dev machine is Win7 which doesn't have docker (all my docker resource is linux).

To be honest, I think the best way of getting Windows unit-tests going would be for someone from CloudBees (with access to the ci.jenkins.io internals) to drive that forwards, as it's likely to require access that non-CloudBees folks don't have.

jglick commented Sep 9, 2019

the "pipeline support" within this plugin is minimal and buggy. e.g. the temporary templates it defines

I do not know all that much about DockerCloud. I was referring to the DockerNodeStep, to which this PR is a direct follow-up.

In a sense, DockerNodeStep can supersede DockerCloud: rather than forcing a Jenkins admin to hard-code and then maintain a list of images, each Jenkinsfile can pick something to run. This step also bypasses the Jenkins queue (Queue, NodeProvisioner, Cloud, Label, ExecutorStepExecution, …) and all of its problems, since the Docker daemon handles scheduling. And compared to withDockerContainer, it avoids all sorts of thorny issues related to --volume vs. USER and workspace file permissions, --volumes-from detection when the master is containerized, ENTRYPOINT vs. sleeping, vagaries of docker exec, …

If you can tell me exactly what changes to the (currently one-line long) Jenkinsfile are needed

Mentioned above, but more explicitly:

-buildPlugin(jenkinsVersions: [null, '2.73.3', '2.89.4', '2.107.1'])
+buildPlugin(jenkinsVersions: [null, '2.73.3', '2.89.4', '2.107.1'], platforms: ['docker', 'windock'])

I suspect that testing on Windows would most likely reveal that Windows docker requires Windows-specific changes

Quite possibly, which is why you would make such a change on a PR, to see what happens.

someone from CloudBees (with access to the ci.jenkins.io internals) to drive that forwards, as it's likely to require access that non-CloudBees folks don't have

Well I am from CloudBees but do not have access to ci.jenkins.io internals, nor does my local Windows 10 installation have Docker (inside VirtualBox and not sure how to install it), so as in jenkinsci/log-cli-plugin#15 I have just tried doing stuff on the server. Probably not good enough for serious development of Windows-specific code, if some is needed.

pjdarton commented Sep 9, 2019

rather than forcing a Jenkins admin to hard-code and then maintain a list of images, each Jenkinsfile can pick something to run

Where I work, the former is preferred by the majority of my developers, i.e. DockerCloud is king.

Most of them really don't want to have to care what docker image provides the capability that they want.
Most of them tell their job to restrict where they run by labels, e.g. "unix && somecomponentsystem && somedatabase", and aren't fussed whether that's provided by a static slave node (e.g. physical or virtual machine) or a docker container - all they care about is that it'll run shell script commands and has "some component system" & "some database" installed and running on it.
These folk generally don't want to have to know the difference between a container that connects via SSH vs a container that connects via JNLP 😉

In my experience, the folks who want to control every aspect of a docker-based build are generally ones who're running "docker run ..." commands from their build, so all they need is a Jenkins slave node that can run unix shell scripts, has the docker client installed, and has (exclusive) access to a docker daemon, as they'll be controlling the docker containers they're interested in through the docker client rather than through Jenkins.
Putting knowledge of what docker image a build needs into the build itself is a level of knowledge that DockerCloud allows you to abstract away and, for my devs, DockerNodeStep requires too much Jenkins-specific information they don't want in their source code.

i.e. my devs would consider DockerNodeStep to be a step backwards, forcing them to get involved in things they don't care about.

FYI, to put this in context: we also make use of the vSphere cloud plugin and the OpenStack cloud plugin to provide additional cloud providers, so we've got multiple sources of slave nodes (in addition to numerous docker clouds and static slave VMs). Our users don't want to have to care what hardware resources are used to provide them with build executors; they just want "something with X and Y available" and leave it up to the Jenkins admins to find the hardware resources etc. to provide that.
I think it's unfortunate that this plugin is called the "docker plugin" instead of the "docker cloud plugin".

TL;DR: This doesn't take this plugin in a direction that I need and I suspect that it's more suited to the docker-workflow plugin's area of expertise.

but more explicitly

Thank you; that was the level of detail I needed...
So replacing the Jenkinsfile with

buildPlugin(
    jenkinsVersions: [
        null,
        '2.73.3',
        '2.89.4',
        '2.107.1',
    ],
    platforms: [
        'docker',
        'windock',
    ],
)

would probably be better in the long run, as that'll allow future PRs to add/remove individual lines instead of just replacing the only line it contains.
In fact, it may be worthwhile splitting the existing Jenkinsfile up into multiple lines first, merging that, and then doing the PR that adds the extra 4 lines for the platforms setting.
Some activity on master will at least trigger a rebuild that might clear the redness...

I am from CloudBees but do not have access

Ah... I'd hope you'd have a better idea about who to talk to about that than I would ... but I work for IBM where "internal access" doesn't necessarily translate to "better access" either - large companies have their own rules 😁

jglick commented Sep 9, 2019

Most of them really don't want to have to care what docker image provides the capability that they want.

Whether they want to care or not, at some point they will need a newer version of some framework and have to pester the administrator to get it installed as an agent template, and then the update will suddenly break unrelated jobs that had not been prepared for incompatible changes in the new version. If the administrator is in constant contact with project developers, this is manageable, but I generally recommend making a Jenkinsfile be as explicit and self-contained as possible so that builds are reasonably reproducible and infrastructure changes can be done via pull request. (And where lots of projects ought to share configuration, introduce a Groovy library which can also be versioned.) Modern containerized CI systems (incl. Tekton) generally make the same choice. Jenkins of course offers an option to please everyone, and sometimes winds up pleasing no one.

a Jenkins slave node that … has (exclusive) access to a docker daemon

If you are able to set up such infrastructure, great. But it generally means you are able to provision VMs as agents; or trust everyone and are willing to configure DinD.

This doesn't take this plugin in a direction that I need

Fine enough, but again, this plugin already defines the dockerNode step. This PR is merely about providing a matching agent syntax for Declarative. (I suppose you could use agent node plus dockerNode {} inside steps in Declarative, but it would be awkward.)
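That awkward alternative would look roughly like the following sketch (parameter names per the existing dockerNode step; the image name is illustrative):

```groovy
pipeline {
    agent none
    stages {
        stage('build') {
            steps {
                // Scripted-style step nested inside Declarative: it works,
                // but the body does not get Declarative's per-stage agent
                // semantics (tools, environment, post cleanup, etc.)
                dockerNode(image: 'maven:3-jdk-8') {
                    sh 'java -version'
                }
            }
        }
    }
}
```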

I suspect that it's more suited to the docker-workflow plugin's area of expertise.

Well. The dockerNode step is closest to DockerCloud in terms of what it is physically doing—connecting to a Docker daemon and launching an agent—and it in fact uses the same internal calls to do so. It is closer to docker-workflow functions in terms of usage patterns—allowing the author of a Jenkinsfile to select a Docker image in which some build steps should run—but docker-workflow features expect to call the docker CLI from an unspecified agent with the CLI installed (and, typically, a private daemon).

(In general I am in favor of deprecating most or all of docker-workflow: certainly the withDockerContainer step, perhaps also withDockerRegistry and withDockerServer that could be handled with the more generic withCredentials, perhaps the docker tool type since the whole tool system in Jenkins is no longer recommended.)

jglick commented Sep 9, 2019

I guess I should add that I personally work mainly with Jenkins on Kubernetes, so I would consider most Docker-based functionality to be borderline obsolete…but for those installations still using Docker directly, anything that can provide simple, sane, and convenient replacements for docker-workflow functionality is a step in the right direction.

@pjdarton

administrator is in constant contact with project developers

That's our situation - our Jenkins servers are dedicated to/owned by the developers.
We're not running Jenkins as an independent service catering to general members of the public - it's our Jenkins server, run by us, for us.
My aim (as a "Jenkins administrator") is to ensure that "my" devs can concentrate on writing code rather than concerning themselves with provisioning issues ... although the boundaries do blur.

VMs ... DinD

Yup; that's what we do, for builds that need to do docker operations.
We do also (for speed & efficiency) have shared docker daemons, but that means everyone has to "play nice" to avoid naming collisions etc, so that's a balancing act between execution speed vs isolation (VMs take ages to spin up compared to a container).

Docker-based functionality ... obsolete

Yup; I agree. Where I work, k8s is "where things are going" and our docker usage is legacy ... but it does have its uses in simpler situations, e.g. provisioning Jenkins slave nodes "on demand".

deprecating most

Ah, so you'd like to deprecate docker-workflow and make this plugin the main docker plugin?
Ok... that wasn't the direction I expected (I thought it was the other way around), but as long as we're not going to end up with two non-deprecated ways of doing the same thing then that's ok with me.
What would not be ok with me is if folks expected me to drive that development - I expect that my devs will be on k8s before we're fully on Jenkinsfile based builds (we have hundreds of non-pipeline old-style jobs defined via the job-dsl-plugin) so my appetite for the amount of investment required for re-writing pipeline support within this plugin is minimal. My boss wants me to concentrate on doing stuff we need (and we don't need this) and I can't really offer good and well-informed opinions on issues I'm not encountering myself.
I'm not going to try to block progress (that, sadly, is how Jenkins ended up with the yet-another-docker-plugin instead of everyone concentrating efforts on this one) but it's not something I'm going to write myself.

TL;DR: If there's a real willingness (by folks other than myself) to provide top-tier, well-documented, pipeline support for docker operations within this plugin then I'm happy to cooperate ... or even grant write access so you can get on with it without bothering me at all 😉

bverkron commented Sep 20, 2019

I now see where all the confusion was coming from. There are two main "paths" / approaches to communicating with Docker on a remote host...

1) Remote host with Docker Engine API exposed to Jenkins Master (no Jenkins agent required on host)
Manage Jenkins > Configure System > Cloud > Docker > Docker Agent templates

  • A label must be included in the agent template config (say dockerSlave)
  • This label must then be included in the declarative pipeline script via agent { label "dockerSlave" } or perhaps other syntaxes.
  • This does not allow things like image 'jenkins/jnlp-slave' to be included in the declarative script, as the image is taken strictly from the agent template config in the UI.
  • The Docker plugin on the Jenkins master creates the container and runs the pipeline inside the container via the Docker Engine API. No Jenkins slave agent is required on the Docker host, but one must exist in the image.

2) Remote host with Jenkins agent and docker engine installed
A "standard" Jenkins slave (Linux based) is set up and registered with the Jenkins master under Manage Jenkins > Manage Nodes. Docker is also installed and the Jenkins slave user is given sufficient permissions to run Docker commands locally.

  • A label must be included in the node config (say dockerEnabledSlave)
  • This label must then be included in the declarative pipeline script via
    agent {
      docker {
        label 'dockerEnabledSlave'
        image 'jenkins/jnlp-slave'
      }
    }
    
    ... or perhaps other syntaxes.
  • This allows for the declarative pipeline syntax mentioned here https://jenkins.io/doc/book/pipeline/syntax/#agent including specifying the image via image 'jenkins/jnlp-slave' (for example), using a dockerfile, etc.
  • This method is totally unrelated to the Cloud > Docker method above.
  • It uses the underlying withDockerContainer, etc. commands mentioned above (which can be seen in the console output of the job).
  • The Jenkins slave agent receives the instructions from the master as it would for any other job but runs it all through local Docker commands to create and run the job (or specified pieces of it) in the container. Presumably the Jenkins slave agent is not required inside the image since all the master/slave interaction is happening via the agent installed directly on the remote host.

Lack of clarity in documentation, tutorials, and even Stack Exchange posts claiming remote hosts with declarative pipelines were simply not possible caused us to mix the above methods, generating some of our problems above (essentially we were attempting Docker-in-Docker inadvertently, I think).

Hopefully this clarification is helpful to others. I've seen this struggle outlined many places online.

jglick commented Sep 20, 2019

This does not allow for things like image 'jenkins/jnlp-slave' to be included in the declarative script

And that is where dockerNode comes in: it works basically like the first option, except

  • the script, not Jenkins global configuration, specifies the agent image to use
  • the Jenkins queue is bypassed, which speeds up agent allocation, reduces the incidental complexity of the system, and allows the agent launch information to be streamed to the build log
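In Scripted Pipeline terms, that usage is simply the following (image name illustrative — unlike option 1, no agent template, label, or global cloud configuration is needed):

```groovy
// The Jenkinsfile itself names the image; no Docker Agent template
// in the global configuration is consulted.
dockerNode(image: 'jenkins/jnlp-slave') {
    sh 'java -version'
}
```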

@pjdarton pjdarton added the enhancement A PR providing an enhancement to existing functionality. label Mar 11, 2020
@solvingj
Not sure if this is even still an active project, but an additional shortcoming of the existing docker plugin has come up, so I thought I'd point it out here for future design of this plugin.

  1. I want the docker container to run with custom user
  2. I want the shell to run in a custom working directory
  3. I want to opt-out of the mounting of the Jenkins volume as the jenkins user

The current plugin makes multiple things in this equation difficult. It's like... really really ugly.

I could understand if forcing the jenkins user and mounting the workspace seems like a requirement, and not doing so turns out to be impossible. But I believe all of the above requirements are achievable, and numerous cloud CIs work this way (Travis/AppVeyor/etc.).

It might be something fairly complex like... start the container as jenkins in a custom working directory, install the agent files, commit the layer, set up auto-registration and autostart, then stop the container, then run the container with the custom user and working directory as usual. There's probably a better idea; it's just an example.

jglick commented Dec 14, 2020

shortcoming of the existing docker plugin

Unclear if this comment refers to this plugin or docker-workflow-plugin, specifically the withDockerContainer step.

@solvingj
Sorry, I meant "this" to be "as you move forward with dockerNode " feature (whatever plugin that might live in). By the "shortcomings of the existing plugin", I meant docker-workflow-plugin with docker.inside(), withContainer, and withRun.

@pjdarton pjdarton mentioned this pull request Jun 7, 2022
@jglick jglick requested a review from a team as a code owner May 23, 2023 16:20
@basil basil left a comment


I guess?

basil commented May 23, 2023

Looks fine overall; can you please either resolve or suppress as false positives the security scan warnings?

jglick commented May 23, 2023

I am not actually sure how to suppress false positives here. https://groups.google.com/g/jenkinsci-dev/c/OMe_zN8-Tkc/m/Nnqv14sbBAAJ by @daniel-beck says @yaroslavafenkin allowed @SuppressWarnings to work, but what would the annotation value be?

The detailed finding descriptions on the GitHub UI explain how to use these to suppress specific findings

sounds like it applies to something visible to a repository owner, but apparently not a contributor.

Note that the flagged code is basically just copied from code already in the repository, which I suppose is also flagged somewhere but not “new”.

basil commented May 23, 2023

You can suppress them with @SuppressWarnings("lgtm[jenkins/no-permission-check]") and/or @SuppressWarnings("lgtm[jenkins/csrf]").
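For illustration, the annotation goes directly on the flagged method; a sketch with a hypothetical method name (the suppression string must match the scanner's rule ID exactly):

```java
// Hypothetical example: the method intentionally omits its own
// permission check, so the corresponding scan finding is suppressed.
@SuppressWarnings("lgtm[jenkins/no-permission-check]")
public void doProvisionContainer() {
    // ... endpoint logic ...
}
```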

@basil left a comment:

Thanks for the PR!

@basil basil merged commit 5756067 into jenkinsci:master May 24, 2023
@jglick jglick deleted the declarative branch May 24, 2023 11:38