[JENKINS-48050] Declarative Pipeline support for dockerNode #681
Conversation
…image argument to reasonable values.
" stages {\n" + | ||
" stage('whatever') {\n" + | ||
" steps {\n" + | ||
" sh 'java -version'\n" + |
This is the basic usage. Pretty straightforward in basic cases: just replace `docker` with `dockerContainer` and test.
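For illustration, a full Declarative example along the lines of the test snippet above might read as follows. This is a sketch only: the `dockerContainer` block syntax and the image name are assumptions based on this PR's test, not confirmed final syntax.

```groovy
pipeline {
    // Hypothetical: the new agent type proposed here, used where
    // `agent { docker { ... } }` would normally appear.
    agent {
        dockerContainer {
            image 'openjdk:8' // assumed option; the image must contain a JVM
        }
    }
    stages {
        stage('whatever') {
            steps {
                sh 'java -version'
            }
        }
    }
}
```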
Seems reasonable at first glance?
sounds good.
LGTM
The CI test run seems fishy though. I do see the new tests skipped on Windows, which is expected, but then I do not see them under any Linux flavor either. Going to double-check and run this locally too.
OK, my bad 🤦‍♂️, didn't click "Show more", and there are too many tests to make it practical to look for a specific one. So 👍, 🚢 🇮🇹 ! :-)
I've been doing some housekeeping and noticed this PR was here with lots of "approved" notes but wasn't merged by @ndeloof at the time. My knowledge of pipelines is limited; my knowledge of how to write code to support them is zero; I doubt I can tell the difference between good & bad code here ... hence this message is basically here to ask you all "WTF?" 😉

I did spot that there's some I also have concerns about documentation.

Lastly, it seems to me that lots of Jenkins users can't tell the difference between the different docker plugins and end up raising bugs here when they're using the docker pipeline plugin instead ... and I believe that this plugin is more "legacy" than the pipeline one.

Note 1: If you want to solve the "too many docker plugins" issue and have a general-purpose high-quality solution, you should also check out https://github.com/KostyaSha/yet-another-docker-plugin - in an ideal world, this plugin (which KostyaSha created) and yet-another-docker-plugin would merge...

Note 2: Re skipping tests on Windows - historically, that's down to the ci.jenkins.io Windows slave VMs not supporting docker - new Windows OSs (can) have docker support, and I'm seeing issues raised by folks expecting this plugin to "just work" on Windows, so it would be nice to be able to test on both... although, personally, I have no experience of docker on windows.
Sure. Bad code can (usually) be improved, and I am willing to put some time into polishing this up if it has a chance of being merged. I think the bigger question is whether it is a good idea to do this, whether the symbol is sufficiently self-explanatory, etc. Would appreciate any opinion from people maintaining Pipeline features on a day-to-day basis besides @abayer: @dwnusbaum, @bitwiseman, etc.
The two you linked to are RFEs, basically—additional options that could be added compatibly.
JENKINS-58223 unfortunately means this is a low bar!
Yes, this is a serious concern, specifically explaining the difference between this and
It already has Pipeline support. This PR is merely adding Declarative syntactic sugar for an existing Scripted-oriented feature: the
As of a few weeks ago, there is a `platforms: ['docker', 'windock']` which I cannot (fruitfully) do since I lack write permissions to this repository, so someone with that permission would need to first file a PR making such a change and verifying that it does not break anything, and then merge it, before a contributor PR like this one could even attempt to take advantage of that label.
Even if it's obvious, it should state it explicitly. ...but, before it's worth doing any of that, I think we first need to work out what strategic direction would be best for Jenkins as a whole - we don't want to replace one non-ideal solution with another one we'll regret :-)
My understanding is that the "pipeline support" within this plugin is minimal and buggy, e.g. the temporary templates it defines conflict with template container-limits because they're indexed by image name, not by the actual template (because templates don't have an id). ...and, given that this plugin doesn't have any maintainers both able and willing to fix this, I don't see that changing unless someone else who does want that kind of thing added is prepared to drive it forwards.

FYI I'm only here reluctantly, as Nick "ran off", leaving me in charge. I'm not using docker pipeline functionality where I work, so I can't spend much time on it (I try to limit my time here to just basic maintenance, merging PRs that I'm 100% sure about + any urgent security fixes) rather than major development.
If you can tell me exactly what changes to the (currently one-line long) Jenkinsfile are needed, I could raise one containing those changes ... however, I suspect ci.jenkins.io is unwell right now, as #748 has yet to be noticed and built despite a few hours elapsing. To be honest, I think the best way of getting Windows unit-tests going would be for someone from CloudBees (with access to the ci.jenkins.io internals) to drive that forwards, as it's likely to require access that non-CloudBees folks don't have.
I do not know all that much about In a sense,
Mentioned above, but more explicitly:

```diff
-buildPlugin(jenkinsVersions: [null, '2.73.3', '2.89.4', '2.107.1'])
+buildPlugin(jenkinsVersions: [null, '2.73.3', '2.89.4', '2.107.1'], platforms: ['docker', 'windock'])
```
Quite possibly, which is why you would make such a change on a PR, to see what happens.
Well I am from CloudBees but do not have access to ci.jenkins.io internals, nor does my local Windows 10 installation have Docker (inside VirtualBox and not sure how to install it), so as in jenkinsci/log-cli-plugin#15 I have just tried doing stuff on the server. Probably not good enough for serious development of Windows-specific code, if some is needed.
Where I work, the former is preferred by the majority of my developers, i.e. most of them really don't want to have to care what docker image provides the capability that they want.

In my experience, the folks who want to control every aspect of a docker-based build are generally ones who're running "docker run ..." commands from their build, so all they need is a Jenkins slave node that can run unix shell scripts, has the docker client installed, and has (exclusive) access to a docker daemon, as they'll be controlling the docker containers they're interested in through the docker client rather than through Jenkins. i.e. my devs would consider

FYI to put this in context: We also make use of the vSphere cloud plugin and the OpenStack cloud plugin to also provide other

TL;DR: This doesn't take this plugin in a direction that I need and I suspect that it's more suited to the docker-workflow plugin's area of expertise.
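The "docker client on a plain agent" style described here might be sketched in Scripted Pipeline like this (the `docker-cli` label is a hypothetical example of an agent that has the client installed and daemon access; a sketch, not a prescribed setup):

```groovy
// The build drives docker itself; Jenkins only supplies a shell,
// the docker CLI, and (exclusive) access to a daemon.
node('docker-cli') { // hypothetical agent label
    checkout scm
    sh 'docker run --rm -v "$PWD":/src -w /src maven:3-jdk-8 mvn -B verify'
}
```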
Thank you; that was the level of detail I needed...
would probably be better in the long run, as that'll allow future PRs to add/remove individual lines instead of just replacing the only line it contains.
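Presumably something along these lines, with one argument per line so that later PRs touch a single line each (the exact formatting here is my guess):

```groovy
// One argument per line: adding or removing a platform or Jenkins
// version later changes one line rather than replacing the whole call.
buildPlugin(
    jenkinsVersions: [null, '2.73.3', '2.89.4', '2.107.1'],
    platforms: ['docker', 'windock']
)
```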
Ah... I'd hope you'd have a better idea about who to talk to about that than I would ... but I work for IBM where "internal access" doesn't necessarily translate to "better access" either - large companies have their own rules 😁
Whether they want to care or not, at some point they will need a newer version of some framework and have to pester the administrator to get it installed as an agent template, and then the update will suddenly break unrelated jobs that had not been prepared for incompatible changes in the new version. If the administrator is in constant contact with project developers, this is manageable, but I generally recommend making a
If you are able to set up such infrastructure, great. But it generally means you are able to provision VMs as agents; or trust everyone and are willing to configure DinD.
Fine enough, but again, this plugin already defines the
Well. The (In general I am in favor of deprecating most or all of
I guess I should add that I personally work mainly with Jenkins on Kubernetes, so I would consider most Docker-based functionality to be borderline obsolete… but for those installations still using Docker directly, anything that can provide simple, sane, and convenient replacements for
That's our situation - our Jenkins servers are dedicated to/owned by the developers.
Yup; that's what we do, for builds that need to do docker operations.
Yup; I agree. Where I work, k8s is "where things are going" and our docker usage is legacy ... but it does have its uses in simpler situations, e.g. provisioning Jenkins slave nodes "on demand".
Ah, so you'd like to deprecate

TL;DR: If there's a real willingness (by folks other than myself) to provide top-tier, well-documented, pipeline support for docker operations within this plugin then I'm happy to cooperate ... or even grant write access so you can get on with it without bothering me at all 😉
I now see where all the confusion was coming from. There are two main "paths" / approaches to communicating with Docker on a remote host...

1) Remote host with Docker Engine API exposed to Jenkins Master (no Jenkins agent required on host)
2) Remote host with Jenkins agent and docker engine installed
Lack of clarity in documentation, tutorials, and even stackexchange posts claiming remote hosts w/ declarative pipelines were simply not possible caused us to mix the above methods, generating some of our problems above (essentially it was attempting docker-in-docker inadvertently, I think). Hopefully this clarification is helpful to others. I've seen this struggle outlined in many places online.
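To make the contrast concrete, path 2 is roughly the docker-workflow style, where the agent's own engine runs the container. A sketch only (the agent label and image are illustrative placeholders):

```groovy
// Path 2: a Jenkins agent that has the docker engine locally;
// docker-workflow then runs build steps inside the container.
node('has-docker') { // hypothetical label for such an agent
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -version'
    }
}
```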
And that is where
Not sure if this is even still an active project but an additional shortcoming of the existing docker plugin has come up and so I thought I'd point it out here for future design of this plugin.
The current plugin makes multiple things in this equation difficult. It's like... really really ugly. I could understand if forcing the jenkins user and mounting the workspace seems like a requirement, and not doing so turns out to be impossible. But I believe all of the above requirements are achievable, and numerous cloud CIs work this way (travis/appveyor/etc). It might be something fairly complex like... start the container as jenkins in a custom working directory, install the agent files, commit the layer, set up auto-registration and autostart, then stop the container, then run the container with the custom user and working directory as usual. There's probably a better idea, it's just an example.
Unclear if this comment refers to this plugin or |
…nto declarative
Sorry, I meant "this" to be "as you move forward with
…nto declarative
I guess?
…e-model-definition` lack it) for SpotBugs jenkinsci#681 (comment)
Looks fine overall; can you please either resolve the security scan warnings or suppress them as false positives?
I am not actually sure how to suppress false positives here. https://groups.google.com/g/jenkinsci-dev/c/OMe_zN8-Tkc/m/Nnqv14sbBAAJ by @daniel-beck says @yaroslavafenkin allowed
sounds like it applies to something visible to a repository owner, but apparently not a contributor. Note that the flagged code is basically just copied from code already in the repository, which I suppose is also flagged somewhere but not “new”.
You can suppress them with
Thanks for the PR!
Implementation of something akin to JENKINS-48050 but much less ambitious in scope than jenkinsci/pipeline-model-definition-plugin#255 + this patch. Rather than pretending to be a compatible variant of the existing `docker` agent type, it just introduces a new one. My reasoning is that `withDockerContainer` (for which `agent {docker …}` is sugar) is deeply flawed and its specification is essentially the implementation—for some cases it works, for other cases it does not and cannot. `dockerRun` is a totally different approach with its own set of tradeoffs. For example, `dockerRun` requires that the image contain a JVM, uses the equivalent of `docker run` from a Java API call made by the master, and keeps the workspace entirely private to the container, which also includes the agent JVM; `withDockerContainer` could use a non-Java-related image, it uses `docker exec` with a CLI run on some agent launched by another technique, it requires that some physical agent have access to a Docker server, it uses workspace filesystem mounts with specific Unix permission issues, and it imposes special restrictions on `ENTRYPOINT`s. The list of differences is so long that I cannot imagine any practical scenario in which a user would wish to “transparently” switch from one “implementation” to another merely by installing a new plugin and perhaps flipping some global switch. Same for the `kubernetes` plugin—this is just a different world, and if you wish to use that technology, editing the `agent` line in your `Jenkinsfile` is only the first step.

No attempt yet to allow customization of server or registry credentials, etc. In fact the `dockerNode` step does not currently support a custom registry at all. (Note that the unfiled patch pretends to accept registry credentials and then pass them to the existing `dockerNode` credentials argument, but this is wrong: the existing argument is server (`dockerd`) credentials, not even of the same physical type!) Anyway such additions could be done easily as compatible changes; this PR is here to get a stake in the ground. Miscellaneous requirements listed in JENKINS-48050 such as `config-file-provider` support are really just requests for tests verifying that everything “just works”, which is far more likely for `dockerRun` than for `withDockerContainer` since once the agent is started, you are doing nothing out of the ordinary, and most of that verification is not specific to Declarative either.

I just picked a symbol for the `agent` type. I do not have a strong opinion. Ideally it would be something that contrasts clearly with the existing `docker` type, which we ought to deprecate as hopeless.

@batmat @rtyler @ndeloof
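For contrast with the Declarative sugar, a minimal Scripted usage of the underlying step might look like this (a sketch: the image name is illustrative, and the `image` parameter form is an assumption rather than confirmed final syntax):

```groovy
// Sketch: dockerNode provisions a one-shot agent inside a container
// launched via the Docker Engine API (so the image must contain a JVM),
// runs the body on it, then throws the container away.
dockerNode(image: 'jenkins/agent') { // image name is an assumption
    sh 'java -version'
}
```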